





ROBOTC 1.35 BETA 1 Release 
Site Admin

Joined: Wed Jan 24, 2007 10:42 am
Posts: 613
Post ROBOTC 1.35 BETA 1 Release
ROBOTC for Mindstorms 1.35 BETA 1 has been released. This version contains most of the bug fixes posted on the Bug Tracker.

Changelog coming later.

http://www.robotc.net/downloads/ROBOTCf ... BETA_1.exe
(correct link this time... sorry guys!)

Update your firmware to version 0743 before using v1.35.

_________________
Timothy Friez
ROBOTC Developer - SW Engineer
tfriez@robotc.net


Last edited by tfriez on Wed Jun 04, 2008 4:35 pm, edited 2 times in total.



Mon Jun 02, 2008 12:19 pm
Rookie

Joined: Sun Aug 12, 2007 3:18 pm
Posts: 38
Is this version for NXT or for IFI?


Mon Jun 02, 2008 12:56 pm
Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
The activation routine is for IFI :shock:
and the compiler, too:
**Error**:'string' type variables not supported on platform
**Error**:'float' type variables not supported on platform
:shock:
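(it already chokes on the very first declarations; a made-up minimal example of the kind of source that triggers this on the IFI build, since that platform apparently has no float or string variables:)

Code:
// hypothetical minimal snippet, not from my program:
float x;            // **Error**:'float' type variables not supported on platform
string s = "NXT";   // **Error**:'string' type variables not supported on platform

task main()
{
}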
:shock: ROBOTCforIFI_135_ ... (screenshot of the installer attached)

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Last edited by Ford Prefect on Mon Jun 02, 2008 3:55 pm, edited 3 times in total.



Mon Jun 02, 2008 1:52 pm
Site Admin

Joined: Wed Jan 24, 2007 10:42 am
Posts: 613
Sorry guys! Posted the wrong build... I updated the original post.

_________________
Timothy Friez
ROBOTC Developer - SW Engineer
tfriez@robotc.net


Mon Jun 02, 2008 3:12 pm
Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
:evil:
there is still the same bug:
Code:
   for (j=0;j<=1;j++) {
         printXY(15+(j*50), 63, "%2.0f", Neuron0[j].in[0]); // ok
         printXY(26+(j*50), 63, "%2.0f", Neuron0[j].in[1]); // ERROR
         printXY(37+(j*50), 63, "%2.0f", Neuron0[j].in[2]); // ERROR

         printXY(00+(j*53), 55, "%3.1f", Neuron0[j].w[0]); // ERROR
         printXY(12+(j*53), 47, "%3.1f", Neuron0[j].w[1]); // ERROR
         printXY(24+(j*53), 55, "%3.1f", Neuron0[j].w[2]); // ERROR

         printXY(25+(j*45), 39, "%3.1f", Neuron0[j].th); // ERROR

         printXY(25+(j*45), 31,"%2.0f",Neuron0[j].out); // ERROR
    }

**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:24
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:28
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:32
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:36
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:40
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:44
**Error**:Internal. Bad temp index in releasing temporary. 30(float). Allocation Index 1/-131Pass/Seq: Emit Code:48
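
Boiled down, the trigger seems to be just this pattern: a float member of a struct array, with the array indexed by the loop variable, passed straight into the display call. A stripped-down, untested sketch (reconstructed for illustration, not my original code):

Code:
typedef struct {
   float in[3];
} tTest;

tTest T[2];

task main()
{
   int j;
   for (j=0; j<=1; j++) {
      // float member of a struct array, indexed by the loop variable j,
      // passed directly as a format argument -> internal error?
      nxtDisplayStringAt(0, 63, "%2.0f", T[j].in[1]);
   }
}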


complete code:
Code:
// Trainable neural network
// Feed-forward network with 3 sensor inputs (touch sensors on S1, S2, S3)
// and 2 output neurons (shown on the display)
// (c) H. W. 2008
// new: groundwork for multi-layer networks & backpropagation
   string Version="0.410";

#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine


//**********************************************************************
// Basic declarations for the neural network
//**********************************************************************


const int nl0 =  2;    // max. neurons in layer 0
const int nl1 =  1;    // max. neurons in layer 1
const int nl2 =  1;    // max. neurons in layer 2
const int nl3 =  1;    // max. neurons in layer 3


const int ni = 3;      // max. dendrite inputs (counter starts at 0)
float lbd    = 0.2;    // learning rate / factor lambda

int key;               // currently pressed NXT button
string MenuText="";    // menu control

float sollOut=0;


//**********************************************************************
// Neuron structure (simplified version)
//**********************************************************************

typedef struct{
   float in[ni];    // individual inputs (dendrites)
   float w[ni];     // individual weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float d;         // delta = error signal
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[nl0];  // neuron layer 0
tNeuron Neuron1[nl1];  // neuron layer 1
tNeuron Neuron2[nl2];  // neuron layer 2
tNeuron Neuron3[nl3];  // neuron layer 3


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// Input/output functions (buttons, display)
//**********************************************************************

int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=4; // guard against accidental presses

  nBtn = nNxtButtonPressed; // check for button press
   switch (nBtn)     {
      case kLeftButton: {
           return 1;   break;     }

        case kEnterButton: {
             return 2;   break;     }

        case kRightButton: {
             return 3;   break;     }

        case kExitButton: {
             return 4;   break;     }

        default: {
             return 0;   break;     }
   }
  return 0;
}

//*****************************************

int getkey() {
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)
  { buf=buttonPressed(); }
  return k;
}

//**********************************************************************

task DisplayValues(){
  int i;  // inputs = sensors
  int j;  // neuron number = outputs
   while(true) {

    printXY( 0, 63, "IN:");
                             printXY(48, 55, "|");
                             printXY(48, 47, "|");
    printXY( 0, 39, "th=");  printXY(48, 39, "|");
    printXY( 0, 31, "OUT");  printXY(48, 31, "|");



     for (j=0;j<=1;j++) {
         printXY(15+(j*50), 63, "%2.0f", Neuron0[j].in[0]);
         printXY(26+(j*50), 63, "%2.0f", Neuron0[j].in[1]);
         printXY(37+(j*50), 63, "%2.0f", Neuron0[j].in[2]);

         printXY(00+(j*53), 55, "%3.1f", Neuron0[j].w[0]);
         printXY(12+(j*53), 47, "%3.1f", Neuron0[j].w[1]);
         printXY(24+(j*53), 55, "%3.1f", Neuron0[j].w[2]);

         printXY(25+(j*45), 39, "%3.1f", Neuron0[j].th);

         printXY(25+(j*45), 31,"%2.0f",Neuron0[j].out);
    }


    // menu lines for button control

    println(7, "%s", MenuText);


  }
  return;
}

//**********************************************************************

void Pause() {
   while(true) wait1Msec(50);
}


//**********************************************************************
// File I/O
//**********************************************************************
const string sFileName = "Memory.dat";

TFileIOResult nIoResult;
TFileHandle   fHandle;

int   nFileSize     = (nl0+nl1+nl2+nl3+1)*100;


void SaveMemory()
{
   int i, j;

   CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   Delete(sFileName, nIoResult);

  OpenWrite(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {
    eraseDisplay();


    j=0;
    //for (j=0;j<nl0;j++)
    {
      for (i=0; i<ni;i++) {
         WriteFloat (fHandle, nIoResult, Neuron0[0].w[i]);
      }
      WriteFloat (fHandle, nIoResult, Neuron0[0].th);


      for (i=0; i<ni;i++) {
         WriteFloat (fHandle, nIoResult, Neuron0[1].w[i]);
      }
      WriteFloat (fHandle, nIoResult, Neuron0[1].th);
    }


    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else PlaySound(soundException);
  }
  else PlaySound(soundDownwardTones);

}

//*****************************************

void RecallMemory()
{
  int i, j;
   CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   OpenRead(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {

    j=0;
    //for (j=0;j<nl0;j++)
    {
      for (i=0; i<ni;i++) {
         ReadFloat (fHandle, nIoResult, Neuron0[0].w[i]);
      }
      ReadFloat (fHandle, nIoResult, Neuron0[0].th);


       for (i=0; i<ni;i++) {
         ReadFloat (fHandle, nIoResult, Neuron0[1].w[i]);
       }
       ReadFloat (fHandle, nIoResult, Neuron0[1].th);
    }

    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else PlaySound(soundException);
  }
  else PlaySound(soundDownwardTones);


  eraseDisplay();

}


//**********************************************************************
// Neural network functions
//**********************************************************************
//**********************************************************************
// Propagation functions: sum the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur){      // propagation function 1
  int i=0;                          // computes the total input (net)
  float s=0;

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur){   // propagation function 2
  int i=0;                          // computes the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s-neur.th;               // minus threshold
}

//**********************************************************************
// Activation functions incl. output (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1 T1: x -> [0; +1]
   if (neur.net>=0)                  // 0/1 threshold function
      {neur.out=1;}                  // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2 T2: x -> x
   neur.out=neur.net;                // identity function
}                                   // function value: identity


void actFermi(tNeuron &neur){       // activation function 3 T3: x -> [0; +1]
   float val;                        // Fermi (logistic) function, differentiable
   float c=3.0;                      // c = steepness; c=1: flat,
  val= (1/(1+(exp(-c*neur.net))));  // c=10: step between x in [-0.1; +0.1]
  neur.out=val;
}

void actTanH(tNeuron &neur){        // activation function 4 T4: x -> [-1; +1]
   float val;                        // hyperbolic tangent, differentiable
   float c=2.0;                      // c = steepness; c=1: flat
  val= tanh(c*neur.net);            // c=3: step between x in [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur){ // reset everything to zero
   int i;

   for (i=0; i<ni; i++) {
      neur.in[i]=0;      // individual input (dendrite)
     neur.w[i]=0;       // individual weight (dendrite)
   }
   neur.net=0;          // total input
   neur.th=0;           // threshold
   neur.out=0;          // computed activation value = output
   }

//*****************************************

void InitAllNeurons(){              // reset all network neurons to zero
   int n;

  for (n=0; n<nl0; n++) {           // neuron layer 0
        ResetNeuron(Neuron0[n]);}
  for (n=0; n<nl1; n++) {           // neuron layer 1
        ResetNeuron(Neuron1[n]);}
  for (n=0; n<nl2; n++) {           // neuron layer 2
        ResetNeuron(Neuron2[n]);}
  for (n=0; n<nl3; n++) {           // neuron layer 3
        ResetNeuron(Neuron3[n]);}
}

//*****************************************


void InitThisNeuralNet()
{
  ; // defaults
}


void PrepThisNeuralNet()  // for testing
{
   ; // defaults
}


//**********************************************************************
// Inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs should be sampled as fast as possible, hence a separate task
int j;
  while(true){

     {
      Neuron0[0].in[0]=(float)SensorValue(0); // input 0: touch sensor on S1=0
      Neuron0[0].in[1]=(float)SensorValue(1); // input 1: touch sensor on S2=1
      Neuron0[0].in[2]=(float)SensorValue(2); // input 2: touch sensor on S3=2

      Neuron0[1].in[0]=(float)SensorValue(0); // input 0: touch sensor on S1=0
      Neuron0[1].in[1]=(float)SensorValue(1); // input 1: touch sensor on S2=1
      Neuron0[1].in[2]=(float)SensorValue(2); // input 2: touch sensor on S3=2
    }
  }
  return;
}

//*****************************************

void SetInputPattern(int m, int n, int o)
{
   Neuron0[0].in[0]=(float)m; Neuron0[0].in[1]=(float)n; Neuron0[0].in[2]=(float)o;
   Neuron0[1].in[0]=(float)m; Neuron0[1].in[1]=(float)n; Neuron0[1].in[2]=(float)o;
}

//**********************************************************************
// evaluate the individual neurons layer by layer
//**********************************************************************

task RefreshLayers(){
  int j;
  while(true){
    for (j=0;j<nl0;j++) {
       netPropagThr(Neuron0[j]);  // weighted sum for layer 0, minus threshold
      act_01(Neuron0[j]);        // activation via the 0/1 threshold function
    }
  }
  return;
}

//**********************************************************************
// Learning procedure
//**********************************************************************


void LearnPerceptronRule() {         // learning mode using the delta rule
  int ErrorCount;
  int m,n,o;  // sensor combinations
  int i;  // inputs

  // int j;  // number of output neurons

 do {
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="-- <<  ok  >> ++";

  for (m=0; m<2; m++)    {
    for (n=0; n<2; n++)   {
     for (o=0; o<2; o++)   {
     SetInputPattern(m,n,o);           // present a virtual input pattern
     wait1Msec(200);

     sollOut=Neuron0[0].out;   // 0


    // for (j=0;j<2;j++) {
    // the compiler does not work correctly here ( BUG ! !)
      // using the counter as the neuron index gives a compiler error!
      // hence duplicated code for each output neuron separately!
      // here for Neuron0[0]
       MenuText="-- <<  ok  >> ++";
       printXY(0,23, "soll:");
       printXY(25,23,"%2.0f", sollOut);
      do                        // correct the generated output
      {
         key=getkey();

         if (key==1) {   if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut< 1) sollOut+=1;  }
        printXY(0,23, "soll:");
         printXY(25,23,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");

      //...................................................
      if (key==4) {                     // learning mode END
         PlaySound(soundException);
         key=0;
         return;
      }  // if key
      //....................................................

                                       // learning mode START

      if (sollOut==Neuron0[0].out)
        {
            PlaySound(soundBlip);         // teachOut correct
         PlaySound(soundBlip);
         wait1Msec(100);
      }  //
        else
        {                                // teachOut wrong
           PlaySound(soundException);
           wait1Msec(100);
        ErrorCount+=1;


           if (sollOut!=Neuron0[0].out)
           {
          for (i=0; i<=nl0; i++)        // for all i (inputs)
              {                             // adjust weights (delta rule)
                 Neuron0[0].w[i] = Neuron0[0].w[i]+ (lbd*Neuron0[0].in[i]*(sollOut-Neuron0[0].out));
              }
           } //
           if (sollOut!=Neuron0[0].out)    // adjust threshold
           {
              Neuron0[0].th = Neuron0[0].th - (lbd*(sollOut-Neuron0[0].out));
           } //

      }  // else

 //...................................................
      sollOut=Neuron0[1].out;   // 0

      // using the for-loop counter as the neuron index gives a compiler error!
      // hence duplicated code for each output neuron separately!
      // here for Neuron0[1]

      printXY(0,23, "soll:");
       printXY(70,23,"%2.0f", sollOut);
      do                       // correct the generated output
      {
         key=getkey();
         if (key==1) {   if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut< 1) sollOut+=1;  }
        printXY(0,23, "soll:");
         printXY(70,23,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");
      //...................................................
      if (key==4) {                     // learning mode END
         PlaySound(soundException);
         key=0;
         return;
      }  // if key
      //....................................................

                                       // learning mode START

      if (sollOut==Neuron0[1].out)
        {
            PlaySound(soundBlip);         // teachOut correct
         PlaySound(soundBlip);
         wait1Msec(100);
      }  //
        else
        {                                // teachOut wrong
           PlaySound(soundException);
           wait1Msec(100);
        ErrorCount+=1;

           if (sollOut!=Neuron0[1].out)
           {
          for (i=0; i<=nl0; i++)       // for all i (inputs)
              {                            // adjust weights (delta rule)
                 Neuron0[1].w[i] = Neuron0[1].w[i]+ (Neuron0[1].in[i]*lbd*(sollOut-Neuron0[1].out));
              }
           } //
           if (sollOut!=Neuron0[1].out)   // adjust threshold
           {
              Neuron0[1].th = Neuron0[1].th - (lbd*(sollOut-Neuron0[1].out));
           } //

      }  // else

    // }  // for j

    }  // for o
   }  // for n
  }  // for m
 } while (ErrorCount>0);

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}

//**********************************************************************
// Program flow control, menus
//**********************************************************************

int Menu_Recall() {
  eraseDisplay();
  MenuText="<Recall    Clear>";
  println(7, "%s", MenuText);
  println(0, "%s", " Hal "+Version);
  println(1, "%s", "----------------");
  println(2, "%s", "Reload my brain -");
  println(4, "%s", " Total Recall ?");
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  PlaySound(soundException);   }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}



int Menu_LearnSaveRun() {
  eraseDisplay();
  MenuText="<Learn  Sav  Run>";
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  SaveMemory(); }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//**********************************************************************
// Main program
//**********************************************************************
int choice;


task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;
  SensorType(S3)=sensorTouch;

  InitAllNeurons();
  InitThisNeuralNet();

  choice=Menu_Recall();
  if (choice==1)  { RecallMemory(); } // load the old memory

  StartTask (DisplayValues);
  StartTask (RefreshLayers);

  while(true)
  {
    choice=Menu_LearnSaveRun();
    if (choice==1)
    {
       StopTask(RefreshInputLayer);
       LearnPerceptronRule();          // learning mode
    }
    MenuText="Menue: [ESC]";
    PlaySound(soundFastUpwardTones);
    StartTask (RefreshInputLayer);    // run mode
    do
    {
       key=getkey();
      wait1Msec(100);
    } while (key!=4);
  }

  Pause();
}
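
Maybe a stop-gap until this is fixed (untested sketch, assuming the bad temporary only appears when the indexed struct member sits directly in the argument list): copy the value into a plain local float first and print that, e.g.:

Code:
   int j;
   float tmp;
   for (j=0;j<=1;j++) {
      tmp = Neuron0[j].in[1];                 // copy into a plain local first
      printXY(26+(j*50), 63, "%2.0f", tmp);   // then pass the local to the display call
   }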

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Last edited by Ford Prefect on Mon Jun 02, 2008 5:27 pm, edited 3 times in total.



Mon Jun 02, 2008 4:27 pm
Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
since installing the IFI version:
RobotC needed to be activated again !!

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Mon Jun 02, 2008 4:35 pm
Site Admin

Joined: Wed Jan 24, 2007 10:42 am
Posts: 613
Ford Prefect wrote:
since installing the IFI version:
RobotC needed to be activated again !!


This had nothing to do with installing the IFI version... they are completely separate from one another.

Whenever there is an increase in the version number (from 1.30 to 1.35), the license system gets rebuilt as well... this is useful because it gives people a fresh 30-day trial with the new version... the downside is that activated users will have to reactivate.

You are not charged an extra activation when reactivating on the same computer :)

_________________
Timothy Friez
ROBOTC Developer - SW Engineer
tfriez@robotc.net


Mon Jun 02, 2008 5:19 pm
Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
ok, thx.
but any idea yet about the struct array index bug?

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Tue Jun 03, 2008 2:27 am
Site Admin

Joined: Wed Jan 24, 2007 10:42 am
Posts: 613
We're working on it :)

_________________
Timothy Friez
ROBOTC Developer - SW Engineer
tfriez@robotc.net


Tue Jun 03, 2008 11:45 am
Guru

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
fine, thx! :D

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Tue Jun 03, 2008 11:57 am