





[resolved:] index of struct array, for-loop: **ERROR** 
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post subject: [resolved:] index of struct array, for-loop: **ERROR**
I always get compiler errors in cases where I index a structure array with an incrementing loop variable, e.g.
Code:
task RefreshInputLayer()
{
  int j;
  while(true){
    for (j=0; j<=1; j++)
    {
      Neuron0[j].in[0]=(float)SensorValue(0); // Input 0: touch sensor at S1=0
      Neuron0[j].in[1]=(float)SensorValue(1); // Input 1: touch sensor at S2=1
    }
  }
  return;
}

compiler error:
**Error**:Internal. Bad temp index in releasing temporary. 20(float). Allocation Index 0/-175Pass/Seq: Emit Code:136

If instead I write it with constant indices,

Code:
task RefreshInputLayer()
{
  // int j;
  while(true){
    {
      Neuron0[0].in[0]=(float)SensorValue(0); // Input 0: touch sensor at S1=0
      Neuron0[0].in[1]=(float)SensorValue(1); // Input 1: touch sensor at S2=1
      Neuron0[1].in[0]=(float)SensorValue(0); // Input 0: touch sensor at S1=0
      Neuron0[1].in[1]=(float)SensorValue(1); // Input 1: touch sensor at S2=1
    }
  }
  return;
}


there is no error.
The error always occurs when a variable loop counter is used as the array index inside a for-loop.

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Last edited by Ford Prefect on Tue Jun 10, 2008 5:54 am, edited 4 times in total.



Posted: Sat May 17, 2008 7:26 am
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Current RobotC version: 1.30 beta2, newly downloaded
filename: ROBOTCforMindstorms130_BETA_2.exe
version 1.30.1
15910450 bytes
Sunday, May 18, 2008, 10:37:17 GMT-2


I get the same errors with this newer RobotC version (?) as reported above:


121 **Error**:Internal. Bad temp index in releasing temporary. 20(float). Allocation Index 1/-129Pass/Seq: Emit Code:20
122 **Error**:Internal. Bad temp index in releasing temporary. 20(float). Allocation Index 1/-129Pass/Seq: Emit Code:24
123 **Error**:Internal. Bad temp index in releasing temporary. 20(float). Allocation Index 1/-129Pass/Seq: Emit Code:28
125 **Error**:Internal. Bad temp index in releasing temporary. 20(float). Allocation Index 1/-129Pass/Seq: Emit Code:32



Last edited by Ford Prefect on Sat May 24, 2008 6:56 am, edited 1 time in total.



Posted: Thu May 22, 2008 3:20 am
Timothy Friez (Site Admin)
Joined: Wed Jan 24, 2007 10:42 am
Posts: 614
There's a new build coming later this evening (still 6-8 hours away) that should have these issues fixed. I will post the build as soon as it is finished, so please do not post ten times asking where it is.

_________________
Timothy Friez
ROBOTC Developer - SW Engineer
tfriez@robotc.net


Posted: Thu May 22, 2008 11:09 am
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
re: ROBOTC 1.30 BETA 4 Release
re: ROBOTC 1.35 BETA 1 Release
re: ROBOTC 1.37 BETA 1 Release

The error still occurs (not always at the first array access, but in the subsequent ones).

Does a bug fix exist in the meantime?

EDIT: with 1.38 it's fine now!
Code:
 
// Trainable 2-layer neural network
// backpropagation net with 3 sensor inputs (touch at S1, S2, S3)
// to 3 hidden neurons,
// then 2 output neurons with 2 outputs (shown on the display)
// (c) H. W. 2008
   string Version="0.435 gamma";


#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine


//**********************************************************************
// Basic declarations for neural networks
//**********************************************************************


const int L0 =  3;    // max. neurons in layer 0 (input layer)
const int L1 =  2;    // max. neurons in layer 1 (output layer of the 2-layer net)
const int L2 =  1;    // max. neurons in layer 2
const int L3 =  1;    // max. neurons in layer 3


const int ni = 3;      // max. dendrite inputs (counted from 0)
float lf     = 0.6;    // learning factor

int key;               // pressed NXT button
string MenuText="";    // menu control

float sollOut=0;       // target (teaching) output


//**********************************************************************
// Neuron structure (simplified version)
//**********************************************************************

typedef struct{
   float in[ni];    // individual inputs (dendrites)
   float w[ni];     // individual weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float d;         // delta = error signal
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[L0];  // neuron layer 0  (input layer)
tNeuron Neuron1[L1];  // neuron layer 1  (output layer of the 2-layer net)
tNeuron Neuron2[L2];  // neuron layer 2
tNeuron Neuron3[L3];  // neuron layer 3
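
// Each neuron computes a total input  net = sum over i of in[i]*w[i]  (minus th
// in netPropagThr) and then an output  out = f(net), where f is one of the
// activation functions defined further below.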


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// Input/output functions (keyboard, display)
//**********************************************************************

int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=4; // guard against accidental presses of the exit button

  nBtn = nNxtButtonPressed; // check for button press
   switch (nBtn)     {
      case kLeftButton: {
           return 1;   break;     }

        case kEnterButton: {
             return 2;   break;     }

        case kRightButton: {
             return 3;   break;     }

        case kExitButton: {
             return 4;   break;     }

        default: {
             return 0;   break;     }
   }
   return 0;
}

//*****************************************

int getkey() {
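   // returns the code of the button currently pressed, then busy-waits until
   // the button is released again (a simple debounce)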
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)
  { buf=buttonPressed(); }
  return k;
}

//**********************************************************************

task DisplayValues(){
  int i;  // input number  = sensors at the hidden layer
  int j;  // neuron number = outputs at the output layer

   while(true) {

    printXY( 0, 55, " out(0) | out(1)");
    printXY(48, 47, "|");
    printXY(48, 39, "|");
    printXY(48, 31, "|");

    printXY( 0, 63, "IN:");                        //  TOP
    printXY(15, 63, "%2.0f", Neuron0[0].in[0]);    //  inputs side by side
    printXY(30, 63, "%2.0f", Neuron0[0].in[1]);
    printXY(45, 63, "%2.0f", Neuron0[0].in[2]);

                                                   //  LEFT AREA
    printXY(00,    47, "%3.1f", Neuron1[0].w[0]);  //  weights of the output layer
    printXY(12,    39, "%3.1f", Neuron1[0].w[1]);  //  (neuron 0)
    printXY(24,    47, "%3.1f", Neuron1[0].w[2]);

                                                   //  RIGHT AREA
    printXY(00+53, 47, "%3.1f", Neuron1[1].w[0]);  //  weights of the output layer
    printXY(12+53, 39, "%3.1f", Neuron1[1].w[1]);  //  (neuron 1)
    printXY(24+53, 47, "%3.1f", Neuron1[1].w[2]);

                                                   //  BOTTOM CENTER LEFT
    printXY(18,    31, "%5.2f", Neuron1[0].th);    //  threshold of output neuron 0

    printXY( 0, 31, "th=");                        //  BOTTOM CENTER RIGHT
    printXY(18+53, 31, "%5.2f", Neuron1[1].th);    //  threshold of output neuron 1

    printXY( 0, 23, "OUT");                        //  BOTTOM (below the output layer)
    printXY(48,    23, "%3.1f", Neuron1[0].out);   //  1st output
    printXY(48+25, 23, "%3.1f", Neuron1[1].out);   //  2nd output


                                                   //  VERY BOTTOM
    println(7, "%s", MenuText);                    //  menu line for keyboard control


  }
  return;
}

//**********************************************************************

void Pause() {
   while(true) wait1Msec(50);
}


//**********************************************************************
// File I/O
//**********************************************************************
const string sFileName = "Memory.dat";

TFileIOResult nIoResult;
TFileHandle   fHandle;

int   nFileSize     = (L0 + L1 + L2 + L3 +1)*100;
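
// File layout used by SaveMemory() / RecallMemory(): for each neuron of
// layer 0 and then layer 1, ni weight floats followed by the threshold float.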


void SaveMemory()
{
   int i, j;

   CloseAllHandles(nIoResult);
   println(6,"%s","Save Memory...");
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   Delete(sFileName, nIoResult);

  OpenWrite(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {
    eraseDisplay();

    for (j=0;j<L0;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron0[j].th);     }

    for (j=0;j<L1;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron1[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) {
       PlaySound(soundUpwardTones);
       println(6,"%s","Save Memory: OK"); }
    else {
       PlaySound(soundException);
       println(6,"%s","Save Memory: ERROR"); }
  }
  else PlaySound(soundDownwardTones);

}

//*****************************************

void RecallMemory()
{
  int i, j;
   println(6,"%s","Recall Memory");
  CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   OpenRead(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {

  for (j=0;j<L0;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron0[j].th);     }

  for (j=0;j<L1;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron1[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else {
       PlaySound(soundException);
       println(6,"%s","Recall: ERROR"); }
  }
  else PlaySound(soundDownwardTones);
  eraseDisplay();

}


//**********************************************************************
// Functions of the neural network
//**********************************************************************

//**********************************************************************
// Inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs taken from the sensor values
int i, j;
  while(true){
  for (j=0; j<L0; j++) {   // all inputs to all input-layer neurons
    for (i=0; i<ni; i++)   {
      Neuron0[j].in[i]=(float)SensorValue(i);
      }
    }
  }
  return;
}

//*****************************************

void SetInputPattern(int m) // 3 inputs generated virtually
{
   int i,j,v;

  for (i=0; i<m; i++)
  {
     v=random(1);
     for (j=0; j<L0;j++)
    {
        Neuron0[j].in[i]=(float)v;
    }
  }
}

//**********************************************************************
// Propagation functions: sum up the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur){      // propagation function 1
  int i=0;                          // computes the total input (net)
  float s=0;

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur){   // propagation function 2
  int i=0;                          // computes the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s-neur.th;               // minus threshold
}

//**********************************************************************
// Activation functions incl. output (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1 T1: x -> [0; +1]
   if (neur.net>=0)                 // 0-1 threshold function
      {neur.out=1;}                 // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2 T2: x -> x
   neur.out=neur.net;               // identity function
}                                   // function value: identity


void actFermi(tNeuron &neur){       // activation function 3 T3: x -> [0; +1]
   float val;                       // Fermi fct. (logistic, differentiable)
   float c=3.0;                     // c = steepness; c=1: flat,
  val= (1/(1+(exp(-c*neur.net))));  // c=10: step between x in [-0.1; +0.1]
  neur.out=val;
}

void actTanH(tNeuron &neur){        // activation function 4 T4: x -> [-1; +1]
   float val;                       // hyperbolic tangent, differentiable
   float c=2.0;                     // c = steepness; c=1: flat
  val= tanh(c*neur.net);            // c=3: step between x in [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur, int rand){ // set everything to zero or randomize
   int i;

   for (i=0; i<ni; i++) {
      neur.in[i]=0;                  // individual input (dendrite)
      if (rand==0) {neur.w[i]=0;}    // individual weight (dendrite) = 0
      else
      neur.w[i]=0.0+random(5)*0.2;   // individual weight (dendrite) randomized

   }
   neur.net=0;                       // total input
   if (rand==0) {neur.th=0;}         // threshold = 0
   else
   neur.th=0.0+random(3)*0.2;        // threshold randomized

   neur.out=0;                       // computed activation value = output
   }

//*****************************************

void InitAllNeurons(){             // reset all neurons of the net
   int j;                          // (to 0 or randomized)

  for (j=0; j<L0; j++) {           // neuron layer 0
        ResetNeuron(Neuron0[j],1);}

  for (j=0; j<L1; j++) {           // neuron layer 1
        ResetNeuron(Neuron1[j],1);}

  for (j=0; j<L2; j++) {           // neuron layer 2
        ResetNeuron(Neuron2[j],1);}

  for (j=0; j<L3; j++) {           // neuron layer 3
        ResetNeuron(Neuron3[j],1);}
}

//*****************************************

void PrepThisNeuralNet()  // for testing
{
   ; // defaults
}


//**********************************************************************
// compute the individual neurons layer by layer
//**********************************************************************

task RefreshLayers(){
  int j, k;

  while(true){

    for (j=0;j<L0;j++) {
      netPropagThr(Neuron0[j]);    // net input, layer 0
      actFermi(Neuron0[j]);        // activation T: Fermi function -> out
      for (k=0;k<L1;k++) {
        Neuron1[k].in[j] = Neuron0[j].out; } // synapse Neuron0 -> Neuron1
    }

    for (j=0;j<L1;j++) {
      netPropagThr(Neuron1[j]);    // net input, layer 1
      actFermi(Neuron1[j]);        // activation T: Fermi function -> out
    }

  }
  return;
}

//**********************************************************************
// Learning procedures
//**********************************************************************


void LearnPerceptronRule() {         // perceptron learning mode
  int ErrorCount;
  int m,n,o;  // sensor combinations
  int i;      // number of inputs
  int j;      // number of output neurons

 do {
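  // outer training loop: repeat the whole pattern set until one complete
  // pass produces no errors (ErrorCount stays 0)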
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="-- <<  ok  >> ++";

  for (m=0; m<2; m++)    {
    for (n=0; n<2; n++)   {
     for (o=0; o<2; o++)   {
     SetInputPattern(ni);           // present a virtual input pattern
     wait1Msec(200);

     for (j=0;j<2;j++)
     {

       sollOut=0;
       MenuText="-- <<  ok  >> ++";
       printXY(0,15, "soll:");
       printXY(48+(j*25),15,"%2.0f", sollOut);
      do                        // correct the generated output
      {
         key=getkey();

         if (key==1) {   if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut< 1) sollOut+=1;  }
        printXY(0,15, "soll:");
         printXY(48+(j*25),15,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");

      //...................................................
      if (key==4) {                     // learning mode: END
         PlaySound(soundException);
         key=0;
         return;
      }
      //....................................................

                                        // learning mode: START
      //....................................................
      if (sollOut==Neuron0[j].out    )  // teachOut correct
        {
            PlaySound(soundBlip);
         PlaySound(soundBlip);
         wait1Msec(100);
      }
        //....................................................
      if (sollOut!=Neuron0[j].out)      // teachOut wrong
        {
           PlaySound(soundException);
           wait1Msec(100);
        ErrorCount+=1;
           //...................................................
                                        // LEARN

        for (i=0; i<ni; i++)            // for all i (inputs); in[] and w[] have ni entries
           {                            // adjust the weights (delta rule)
              Neuron0[j].w[i] = Neuron0[j].w[i]+ (lf *Neuron0[j].in[i]*(sollOut-Neuron0[j].out));
           }

           if (sollOut!=Neuron0[j].out)    // adjust the threshold (delta rule, extended)
           {
              Neuron0[j].th = Neuron0[j].th - (lf *(sollOut-Neuron0[j].out));
           }
        //...................................................
      } // if (sollOut!=Neuron0[j].out)

     } // for j

    } // for o
   } // for n
  } // for m
 } while (ErrorCount>0);

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}

//**********************************************************************


void LearnBackpropagation() {    // backpropagation learning mode
                                 // 1 input/hidden layer (L0) + 1 output layer (L1)
  int ErrorCount;
  int m,n,o;      // sensor combinations
  int i;          // counter: inputs
  int j;          // counter: output neurons
  int s;          // counter: hidden neurons


  float f_sig1;   // error signal of layer 1 for learning weights and threshold
  float f_sig0;   // error signal of layer 0 for learning weights and threshold

  float f_sum=0;  // hidden-layer error (sum of (weight * error signal))
  float f_out;    // error value sollOut-out
  float fehler=0; // sum of the squared error signals

  float delta_w;  // change applied to a weight
  float delta_th; // change applied to the threshold


 do {
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="-- <<  ok  >> ++";

  for (m=0;m<8;m++){
     SetInputPattern(ni);       // present a virtual input pattern

     wait1Msec(200);            // compute through all layers


     for (j=0;j<L1;j++)  // the 2 outputs are trained
     {

       sollOut=0;   // 0
       MenuText="-- <<  ok  >> ++";
       printXY(0,15, "soll:");
       printXY(48+(j*25),15,"%2.0f", sollOut);

      do                             // correct the generated output
      {
         key=getkey();

         if (key==1) {   if (sollOut==1) sollOut=0;  }
         else
         if (key==3) { if (sollOut==0) sollOut=1;  }

         printXY(0,15, "soll:");
         printXY(48+(j*25),15,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(6, " ");

      //...................................................
      if (key==4) {                     // learning mode: END
         PlaySound(soundException);
         key=0;
         return;
      }  // if key
      //....................................................

                                        // learning mode: START
      //....................................................
      if (sollOut==Neuron1[j].out  )    // teachOut correct
        {
            PlaySound(soundBlip);
         PlaySound(soundBlip);
         wait1Msec(100);
      }  //
        //....................................................
      if (sollOut!=Neuron1[j].out)      // teachOut wrong
        {
           PlaySound(soundException);
           wait1Msec(100);
        ErrorCount+=1;


                                        //                LEARN
                                        // step 1: handle the output layer L1(j)
                                        // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        f_out=(sollOut-Neuron1[j].out);                        // error value
        f_sig1=Neuron1[j].out*(1-Neuron1[j].out)*f_out;        // error signal for the output layer...
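                                        // note: out*(1-out) is the derivative of the logistic
                                        // activation (with steepness c=1), so f_sig1 corresponds
                                        // to the usual backprop error term f'(net)*(sollOut-out)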

        Neuron1[j].d=f_sig1;            // ...stored in Neuron1

        fehler=fehler+(f_out*f_out);    // total error of all output neurons

        delta_w = lf  * Neuron1[j].out * f_sig1;        // change value for the weights
        for (i=0;i<L0;i++)
        {  Neuron1[j].w[i]=Neuron1[j].w[i] + delta_w;}  // new weights in the output layer L1(j)

        delta_th = lf  * f_sig1;                    // change value delta_th
        Neuron1[j].th  = Neuron1[j].th + delta_th;  // new thresholds in the output layer L1(j)



                                        //                LEARN
        for (s=0;s<L1;s++)              // step 2: handle the hidden layer L0(s)
                                        // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        {

           f_sum=0;
           for (i=0;i<L1;i++)
           { f_sum=f_sum+(Neuron0[s].w[i] * f_sig1);  }       // sum over all (weights(L0) * error signals(L1))

           f_sig0 = Neuron0[s].out*(1-Neuron0[s].out)*f_sum;  // new error signal (hidden layer)...

           Neuron0[s].d = f_sig0;                             // ...stored in Neuron0

           delta_w = lf  * Neuron0[s].out * f_sig1;       // change value using the successor's error signal
           for (i=0;i<L1;i++)
           { Neuron0[s].w[i]=Neuron0[s].w[i] + delta_w; } // new weights in the hidden layer L0(s)


           delta_th = lf  * f_sig0;                       // change value delta_th
           Neuron0[s].th  = Neuron0[s].th  + delta_th;    // new thresholds in the hidden layer L0(s)


        } // for (s=0;s<L1;s++)


           //...................................................
      } // if (sollOut!=Neuron1[j].out)

     } // for j
   } // for m


 } while (ErrorCount>0);

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}






//**********************************************************************
// Program flow control, menus
//**********************************************************************

int Menu_Recall() {
  eraseDisplay();
  MenuText="<Recall    Clear>";
  println(7, "%s", MenuText);
  println(0, "%s", " Hal "+Version);
  println(1, "%s", "----------------");
  println(2, "%s", "Reload my brain -");
  println(4, "%s", " Total Recall ?");
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  PlaySound(soundException);   }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}



int Menu_LearnSaveRun() {
  eraseDisplay();
  MenuText="<Learn  Sav  Run>";
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  SaveMemory(); }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//**********************************************************************
// Main program
//**********************************************************************
int choice;


task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;
  SensorType(S3)=sensorTouch;

  nVolume=2;
  InitAllNeurons();
  PrepThisNeuralNet();

  choice=Menu_Recall();
  if (choice==1)  { RecallMemory(); } // load the old memory

  StartTask (DisplayValues);
  StartTask (RefreshLayers);

  while(true)
  {
    choice=Menu_LearnSaveRun();
    if (choice==1)
    {
       StopTask(RefreshInputLayer);
       LearnBackpropagation();          // learning mode
    }
    MenuText="Menue: [ESC]";
    PlaySound(soundFastUpwardTones);
    StartTask (RefreshInputLayer);    // run mode
    do
    {
       key=getkey();
      wait1Msec(100);
    } while (key!=4);
  }

}




**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-131Pass/Seq: Emit Code:20
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-131Pass/Seq: Emit Code:22
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:26
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:30
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:34
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:38
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:42
**Error**:Internal. Bad temp index in releasing temporary. 30(short). Allocation Index 1/-136Pass/Seq: Emit Code:50
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-136Pass/Seq: Emit Code:54
**Error**:Internal. Bad temp index in releasing temporary. 30(short). Allocation Index 1/-141Pass/Seq: Emit Code:64
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-141Pass/Seq: Emit Code:68
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-223Pass/Seq: Emit Code:180
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 0/-223Pass/Seq: Emit Code:182



Last edited by Ford Prefect on Sun Jun 22, 2008 7:44 am, edited 12 times in total.



Posted: Fri May 23, 2008 2:33 pm
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Hint for the de-velo-buggers:

The bug always occurs with an array-of-struct element if the index is a variable and a sub-structure member is explicitly accessed in a for-loop, like
for (j=0;j<nl0;j++) {
  Neuron0[j].th =...
  Neuron0[j].in[0] =... }
In addition, the first for-loop access is sometimes fine, while all of the following ones are faulty:

Code:
void SaveMemory() {
//...
for (j=0;j<nl0;j++)
    {
    // ... 
    WriteFloat (fHandle, nIoResult, Neuron0[j].th);  //  [ERROR !]

    }
// ...
}


void SetInputPattern(int m, int n, int o)
{
   int j;
   for (j=0; j<nl0;j++)
  {
     Neuron0[j].in[0]=(float)m;
     Neuron0[j].in[1]=(float)n;  //  [ERROR !]
     Neuron0[j].in[2]=(float)o;  //  [ERROR !]
   }
}



If just a complete structure is passed to a function (in a for-loop), there is no error, even if the structure element HAS a variable index, like
for (j=0;j<nl0;j++) {
  netPropagThr(Neuron0[j]); }
Code:
task RefreshLayers(){
  int j;
  while(true){
    for (j=0;j<nl0;j++) {
       netPropagThr(Neuron0[j]);  // weighted sum of layer 0, minus threshold
      act_01(Neuron0[j]);        // activation via the threshold function
    }
  }
  return;
}


CRAZY:
in this specific case there is NO ERROR:

Code:
task RefreshInputLayer(){  // inputs should be sampled very quickly, hence a separate task
int i, j;
  while(true){
  for (j=0; j<nl0; j++) {
    for (i=0; i<ni; i++)   {
      Neuron0[j].in[i]=(float)SensorValue(i); // input i: touch sensor value
      }
    }
  }
  return;
}
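
Based on the observation above that passing the complete structure to a function by reference compiles cleanly, one possible interim workaround is to move the member writes into a small helper that receives the whole struct by reference, so the loop body itself never indexes a struct member through a variable index. This is only a sketch reusing ni, L0, tNeuron and Neuron0 from the program above; the helper name SetNeuronInputs is made up here, and whether it really sidesteps the compiler bug in a given beta would still have to be verified:

Code:
// hypothetical workaround sketch (helper name invented for illustration)
void SetNeuronInputs(tNeuron &neur)
{
  int i;
  for (i=0; i<ni; i++) {
    neur.in[i]=(float)SensorValue(i);   // copy all sensor values into this neuron
  }
}

task RefreshInputLayer(){
  int j;
  while(true){
    for (j=0; j<L0; j++) {
      SetNeuronInputs(Neuron0[j]);      // whole-struct reference, as in netPropagThr()
    }
  }
  return;
}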



Posted: Thu Jun 05, 2008 5:24 am
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
(image)



Posted: Fri Jun 06, 2008 5:27 pm
Ford Prefect (Guru)
Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
With 1.38 everything is fine!



Posted: Sun Jun 08, 2008 8:05 am