First Neural Net implemented on a Lego NXT (C like, RobotC) 
hi,
this is my first (simple) neural net implemented on a Lego NXT.
Hardware: 2 touch sensors (S1, S2)
=> 4 possible input patterns ( [0; 0], [1; 0], [0; 1], [1; 1])
=> 2 possible outputs ( 0 or 1)

The 2 touch sensors are "logically" plugged into the inputs; the neuron calculates the net value ("propagation") and then its output ("activation").

This is the architecture of this digital neuron
(the neuron in the picture has 1 extra input, i.e. 3 inputs in total; my first implementation needs only 2):
[image: architecture of the digital neuron]
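
A minimal sketch of what one forward pass of such a neuron does (propagation, then a 0/1 threshold activation), using the same names as the tNeuron struct in the listing below; the helper function itself is just for illustration:

Code:
// one forward pass of a single LTU (illustration only):
// net = weighted sum of the inputs minus threshold ("propagation"),
// out = 0/1 step function ("activation")
float forwardLTU(float in0, float in1, float w0, float w1, float th)
{
  float net = in0*w0 + in1*w1 - th;   // propagation
  if (net >= 0) return 1;             // activation: neuron fires
  else return 0;                      //             neuron stays silent
}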

At start-up, choose training mode (left button).
At the top of the screen the 4 possible patterns are automatically generated and displayed step by step: [0; 0], [1; 0], [0; 1], [1; 1].

Below this are shown:
- the calculated weights,
- then the calculated threshold, and then
- the actual calculated (suggested) output,

- and below that you can enter the correct output that is to be taught (left button: decrease, right button: increase, orange button: ENTER).

If learning is correct and complete, the training mode stops automatically;
to interrupt the training mode manually, press [ESC] (dark grey key).
Then you can test the trained patterns by pressing the touch sensors.

Return to learning mode: press [ESC] (dark grey key) again. This lets you retrain new patterns.
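
The rule used for the learning step below is the classic delta (perceptron) rule; a condensed sketch using the tNeuron struct and the learning factor lbd = 0.3 from the listing (the wrapper function itself is mine, for illustration only):

Code:
// delta rule, as used in LearnPerceptronRule() below (illustration only)
void deltaRule(tNeuron &neur, float target)
{
  int i;
  float err = target - neur.out;                  // teaching error
  for (i=0; i<2; i++)
     neur.w[i] = neur.w[i] + lbd*neur.in[i]*err;  // adapt weights
  neur.th = neur.th - lbd*err;                    // threshold moves the other way
}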

you may train e.g.:
a OR b
a AND b
a NOR b
a NAND b
a AND (NOT b)
and even more...
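
(One well-known caveat, not specific to this program: a single LTU can only learn linearly separable functions, so a XOR b is not trainable here; that is what the multi-layer nets further down in this thread are for. A quick check with net = a*w0 + b*w1 - th and out = (net >= 0):

Code:
// XOR would require all four of these at once:
// (0,0)->0:            -th <  0   =>  th > 0
// (1,0)->1:        w0 - th >= 0   =>  w0 >= th
// (0,1)->1:        w1 - th >= 0   =>  w1 >= th
// (1,1)->0:   w0 + w1 - th <  0   =>  w0 + w1 < th
// but w0 >= th and w1 >= th give w0 + w1 >= 2*th > th: contradiction.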

Code:
// Trainable neural net consisting of 1 neuron
// for the Lego NXT, programmed in RobotC
// LTU = Linear Threshold Unit, or
// Adaline = Adaptive linear element
// (c) H. W. 2008
// version HaWe 0.2.3.beta


#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine

//**********************************************************************
// basic declarations for neural nets
// (to be moved out into Neurons.h later)
//**********************************************************************


const int nl0 =  1;    // max. neurons in layer 0
                       // (in single-layer nets this is the output layer)
const int nl1 =  1;    // max. neurons in layer 1
const int nl2 =  1;    // max. neurons in layer 2
const int nl3 =  1;    // max. neurons in layer 3


const int nd = 20;     // max. dendrite inputs
float lbd = 0.3;       // learning factor lambda

int key;               // pressed NXT button
string MenuText="";    // menu control

float sollOut=0;       // target ("soll") output
// float  teachOut=0;



//**********************************************************************
// neuron structure (simplified version)
//**********************************************************************

typedef struct{
   float in[nd];    // single inputs (dendrites)
   float w[nd];     // single weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[nl0];  // neuron layer 0
tNeuron Neuron1[nl1];  // neuron layer 1
tNeuron Neuron2[nl2];  // neuron layer 2
tNeuron Neuron3[nl3];  // neuron layer 3


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// input/output functions (buttons, display)
//**********************************************************************

int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=5;         // guard against accidental [ESC] presses

  nBtn = nNxtButtonPressed; // check for button press
  switch (nBtn) {
    case kLeftButton:  return 1;
    case kEnterButton: return 2;
    case kRightButton: return 3;
    case kExitButton:  return 4;
    default:           return 0;
  }
  return 0;
}

//*****************************************

int getkey() {
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)
  { buf=buttonPressed(); }
  return k;
}

//**********************************************************************

task DisplayValues(){

   while(true) {

    printXY( 0, 63, "IN:");
    printXY(32, 63, "%1.0f", Neuron0[0].in[0]);  printXY( 62,63, "%1.0f", Neuron0[0].in[1]);

    printXY( 0, 55, "w01"); printXY(20,55,"%4.2f %4.2f",Neuron0[0].w[0],Neuron0[0].w[1] );

    printXY( 0, 47, "th="); printXY(32, 47, "%4.2f", Neuron0[0].th);

    printXY( 0, 39, "OUT");  printXY(30,39,"%4.1f",Neuron0[0].out);

    // printXY( 0, 31, "tOut"); printXY(30,31,"%4.1f",teachOut);


    // menu lines for button control

    println(6, "%s", MenuText);
    if (key==1) {printXY( 0, 7, "<");}
    else
    if (key==2) {printXY( 42, 7, "ok");}
    else
    if (key==3) {printXY( 92, 7, ">");}
    else
    if (key==4) {printXY( 42, 7, "ESC");}
    else
    if (key==0) {printXY( 0, 7, "                    ");}

  }
  return;
}

//**********************************************************************

void Pause() {
   while(true) wait1Msec(50);
}

//**********************************************************************
// functions of the neural net
//**********************************************************************
//**********************************************************************
// propagation functions: sum up the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur){      // propagation function 1:
  int i=0;                          // calculates the total input (net)
  float s=0;

  for(i=0;i<nd;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur){   // propagation function 2:
  int i=0;                          // calculates the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<nd;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s-neur.th;               // minus threshold
}

//**********************************************************************
// activation functions (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1, T1: x -> [0; +1]
   if (neur.net>=0)                 // 0-1 threshold function
      {neur.out=1;}                 // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2, T2: x -> x
   neur.out=neur.net;               // identity function
}


void actFermi(tNeuron &neur){       // activation function 3, T3: x -> [0; +1]
   float val;                       // Fermi (logistic) function, differentiable
   float c=3.0;                     // c = steepness; c=1: flat,
  val= (1/(1+(exp(-c*neur.net))));  // c=10: jump between x E [-0.1; +0.1]
  neur.out=val;
}

void actTanH(tNeuron &neur){        // activation function 4, T4: x -> [-1; +1]
   float val;                       // hyperbolic tangent, differentiable
   float c=2.0;                     // c = steepness; c=1: flat
  val= tanh(c*neur.net);            // c=3: jump between x E [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur){ // set everything to zero
   int d;

   for (d=0; d<nd; d++) {
      neur.in[d]=0;      // single input (dendrite)
      neur.w[d]=0;       // single weight (dendrite)
   }
   neur.net=0;           // total input
   neur.th=0;            // threshold
   neur.out=0;           // calculated activation value = output
}

//*****************************************

void InitAllNeurons(){              // set all net neurons to zero
   int n;

  for (n=0; n<nl0; n++) {           // neuron layer 0
        ResetNeuron(Neuron0[n]);}
  for (n=0; n<nl1; n++) {           // neuron layer 1
        ResetNeuron(Neuron1[n]);}
  for (n=0; n<nl2; n++) {           // neuron layer 2
        ResetNeuron(Neuron2[n]);}
  for (n=0; n<nl3; n++) {           // neuron layer 3
        ResetNeuron(Neuron3[n]);}
}

//*****************************************


void InitThisNeuronalNet(){
  Neuron0[0].w[0]=0;             // weight for input 0
  Neuron0[0].w[1]=0;             // weight for input 1
  Neuron0[0].th  =0;             // threshold
}


//**********************************************************************
// inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs must be sampled very fast, hence a separate task
  while(true){
    Neuron0[0].in[0]=(float)SensorValue(0); // input 0: touch sensor at S1=0
    Neuron0[0].in[1]=(float)SensorValue(1); // input 1: touch sensor at S2=1
  }
  return;
}

//*****************************************

void SetInputPattern(int pattern)
{
   switch (pattern) {
      case 0: { Neuron0[0].in[0]=0; Neuron0[0].in[1]=0;  break;     }
      case 1: { Neuron0[0].in[0]=1; Neuron0[0].in[1]=0;  break;     }
      case 2: { Neuron0[0].in[0]=0; Neuron0[0].in[1]=1;  break;     }
      case 3: { Neuron0[0].in[0]=1; Neuron0[0].in[1]=1;  break;     }
  }
}

//**********************************************************************
// calculate the individual neurons, layer by layer
//**********************************************************************

task RefreshLayer(){
   int i;
  while(true){
    for (i=0;i<nl0;i++) {
       netPropagThr(Neuron0[i]);  // weighted sum of layer 0, minus threshold
      act_01(Neuron0[i]);         // activation via 0-1 threshold function
    }
  }
  return;
}

//**********************************************************************
// learning procedure
//**********************************************************************

void LearnPerceptronRule() {         // learning mode (delta rule)
  int m, i, ErrorCount;


ErrorCount=4;
while (ErrorCount>0) {
  PlaySound(soundBeepBeep);
  ErrorCount=4;

   for (m=0; m<=3; m++) {          // m: pattern number
     SetInputPattern(m);           // present the pattern

    sollOut=Neuron0[0].out; // here the compiler does not work correctly (BUG!!)
    // therefore, as a substitute:
    sollOut=0;


     MenuText="-- <<   ?  >> ++";  // train the pattern
    do
    {
       printXY(0,23, "Soll-OUT: %5.2f", sollOut);

       key=getkey();
       if (key==1) {   if (sollOut>-1) sollOut-=1;  }
       else
       if (key==3) { if (sollOut< 1) sollOut+=1;  }
      printXY(0,23, "Soll-OUT: %5.2f", sollOut);
      wait1Msec(100);
    } while ((key!=2)&&(key!=4));

    println(5, " ");
    if (key==4) {                    // quit learning mode
       PlaySound(soundException);
       key=0;
       return;
    }  // if key
    if (sollOut==Neuron0[0].out)
    {                                // teachOut correct
       PlaySound(soundBlip);
       PlaySound(soundBlip);
       wait1Msec(100);
       ErrorCount-=1;
    }  // if sollOut
    else
    {                                // teachOut wrong
       PlaySound(soundDownwardTones);
       wait1Msec(100);

       for (i=0; i<=1; i++)          // i: inputs (=2)
       {                             // adapt weights (delta rule)
          Neuron0[0].w[i]=Neuron0[0].w[i]+ (lbd*Neuron0[0].in[i]*(sollOut-Neuron0[0].out));
       } // for i
                                     // adapt threshold (delta rule, extended)
       Neuron0[0].th=Neuron0[0].th - (lbd*(sollOut-Neuron0[0].out));

    }  // else

  }  // for m
} // while ErrorCount

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}








//**********************************************************************
// program-flow control, menus
//**********************************************************************

void MenuLearnOrRun() {

  eraseDisplay();
   MenuText="< learn |  run >";
  do
  {
     StopTask(RefreshInputLayer);

     MenuText="< learn |  run >";
     key=getkey();
     wait1Msec(100);
     if (key==1)
     {  LearnPerceptronRule();   }
    if (key==2)
     {  PlaySound(soundException);    }
    if (key==4)
     {  PlaySound(soundException);    }
  }
  while ((key==0)||(key==2)||(key==4));
  wait1Msec(100);

}

//**********************************************************************
// main program
//**********************************************************************

task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;

  InitAllNeurons();
  InitThisNeuronalNet();

  StartTask (DisplayValues);
  StartTask (RefreshLayer);

  while(true)
  {
    MenuLearnOrRun();
    PlaySound(soundFastUpwardTones);
    MenuText="";
    key=0;
    StartTask (RefreshInputLayer);
    do
    {
       MenuText="Training: [ESC]";
       key=getkey();
      wait1Msec(100);
    } while (key!=4);
  }


  Pause();
}


_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Posted: Thu May 15, 2008 2:01 pm

...and the following is a feed-forward net with
3 inputs (touch sensors)
and
2 outputs:
[image: feed-forward net, 3 inputs, 2 output neurons]
The 3 inputs are plugged into both of the 2 neurons, and each of these neurons can be trained to a different output behavior.

So all in all, 8 different input patterns (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0)...
can be projected onto 4 different output patterns, e.g.
[0,0]: 1 motor stops,
[0,1]: 1 motor slowly forward,
[1,0]: 1 motor reverse,
[1,1]: 1 motor fast forward,

or:
[0,0]: 2 motors stop,
[0,1]: 2 motors forward,
[1,0]: 2 motors reverse,
[1,1]: 1 motor forward, 1 motor reverse (turn).

EDIT: this version works fine with RobotC 1.38 Beta 1
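
The motor side is not part of the listing; just to illustrate the idea, a hypothetical decoder from the two neuron outputs to one motor could look like this (motor[] is standard RobotC, the speed values are made up):

Code:
// hypothetical decoder: 2 binary net outputs -> 1 motor behavior
// (not part of the original program; speeds are arbitrary)
void driveFromOutputs(float out0, float out1)
{
  if (out0==0 && out1==0) motor[motorA] =   0;  // [0,0]: stop
  if (out0==0 && out1==1) motor[motorA] =  30;  // [0,1]: slowly forward
  if (out0==1 && out1==0) motor[motorA] = -50;  // [1,0]: reverse
  if (out0==1 && out1==1) motor[motorA] = 100;  // [1,1]: fast forward
}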


Code:
// Trainable neural net
// for the Lego NXT
// requires RobotC 1.38 Beta 1 or later!
//
// feed-forward net with 3 sensor inputs (touch at S1, S2, S3)
// and 2 output neurons (shown on the display)
// (c) H. W. 2008
// new: preparations for multi-layer nets & backpropagation
   string Version="0.414";

#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine


//**********************************************************************
// basic declarations for neural nets
//**********************************************************************


const int nl0 =  2;    // max. neurons in layer 0
const int nl1 =  1;    // max. neurons in layer 1
const int nl2 =  1;    // max. neurons in layer 2
const int nl3 =  1;    // max. neurons in layer 3


const int ni = 3;      // max. dendrite inputs (counted from 0)
float lbd    = 0.2;    // learning factor lambda

int key;               // pressed NXT button
string MenuText="";    // menu control

float sollOut=0;       // target ("soll") output


//**********************************************************************
// neuron structure (simplified version)
//**********************************************************************

typedef struct{
   float in[ni];    // single inputs (dendrites)
   float w[ni];     // single weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float d;         // delta = error signal
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[nl0];  // neuron layer 0
tNeuron Neuron1[nl1];  // neuron layer 1
tNeuron Neuron2[nl2];  // neuron layer 2
tNeuron Neuron3[nl3];  // neuron layer 3


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// input/output functions (buttons, display)
//**********************************************************************

int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=4;         // guard against accidental [ESC] presses

  nBtn = nNxtButtonPressed; // check for button press
  switch (nBtn) {
    case kLeftButton:  return 1;
    case kEnterButton: return 2;
    case kRightButton: return 3;
    case kExitButton:  return 4;
    default:           return 0;
  }
  return 0;
}

//*****************************************

int getkey() {
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)
  { buf=buttonPressed(); }
  return k;
}

//**********************************************************************

task DisplayValues(){
  int i;  // inputs = sensors
  int j;  // neuron number = outputs
   while(true) {

    printXY( 0, 63, "IN:");
                             printXY(48, 55, "|");
                             printXY(48, 47, "|");
    printXY( 0, 39, "th=");  printXY(48, 39, "|");
    printXY( 0, 31, "OUT");  printXY(48, 31, "|");




     for (j=0;j<nl0;j++) {
         printXY(15, 63, "%2.0f", Neuron0[j].in[0]);
         printXY(26, 63, "%2.0f", Neuron0[j].in[1]);
         printXY(37, 63, "%2.0f", Neuron0[j].in[2]);

         printXY(00+(j*53), 55, "%3.1f", Neuron0[j].w[0]);
         printXY(12+(j*53), 47, "%3.1f", Neuron0[j].w[1]);
         printXY(24+(j*53), 55, "%3.1f", Neuron0[j].w[2]);

         printXY(25+(j*45), 39, "%3.1f", Neuron0[j].th);

         printXY(25+(j*45), 31, "%2.0f", Neuron0[j].out);
    }

    // menu lines for button control

    println(7, "%s", MenuText);


  }
  return;
}

//**********************************************************************

void Pause() {
   while(true) wait1Msec(50);
}


//**********************************************************************
// File I/O
//**********************************************************************
const string sFileName = "Memory.dat";

TFileIOResult nIoResult;
TFileHandle   fHandle;

int   nFileSize     = (nl0+nl1+nl2+nl3+1)*100;


void SaveMemory()
{
   int i, j;

   CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   Delete(sFileName, nIoResult);

  OpenWrite(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {
    eraseDisplay();

    for (j=0;j<nl0;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron0[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else PlaySound(soundException);
  }
  else PlaySound(soundDownwardTones);

}

//*****************************************

void RecallMemory()
{
  int i, j;
   CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   OpenRead(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {

    j=0;
    for (j=0;j<nl0;j++) {
      for (i=0; i<ni;i++)
      { ReadFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
        ReadFloat (fHandle, nIoResult, Neuron0[j].th);     }

    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else PlaySound(soundException);
  }
  else PlaySound(soundDownwardTones);
  eraseDisplay();

}


//**********************************************************************
// functions of the neural net
//**********************************************************************
//**********************************************************************
// propagation functions: sum up the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur){      // propagation function 1:
  int i=0;                          // calculates the total input (net)
  float s=0;

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur){   // propagation function 2:
  int i=0;                          // calculates the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s-neur.th;               // minus threshold
}

//**********************************************************************
// activation functions incl. output (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1, T1: x -> [0; +1]
   if (neur.net>=0)                 // 0-1 threshold function
      {neur.out=1;}                 // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2, T2: x -> x
   neur.out=neur.net;               // identity function
}


void actFermi(tNeuron &neur){       // activation function 3, T3: x -> [0; +1]
   float val;                       // Fermi (logistic) function, differentiable
   float c=3.0;                     // c = steepness; c=1: flat,
  val= (1/(1+(exp(-c*neur.net))));  // c=10: jump between x E [-0.1; +0.1]
  neur.out=val;
}

void actTanH(tNeuron &neur){        // activation function 4, T4: x -> [-1; +1]
   float val;                       // hyperbolic tangent, differentiable
   float c=2.0;                     // c = steepness; c=1: flat
  val= tanh(c*neur.net);            // c=3: jump between x E [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur){ // set everything to zero
   int i;

   for (i=0; i<ni; i++) {
      neur.in[i]=0;      // single input (dendrite)
      neur.w[i]=0;       // single weight (dendrite)
   }
   neur.net=0;           // total input
   neur.th=0;            // threshold
   neur.out=0;           // calculated activation value = output
}

//*****************************************

void InitAllNeurons(){              // set all net neurons to zero
   int j;

  for (j=0; j<nl0; j++) {           // neuron layer 0
        ResetNeuron(Neuron0[j]);}

  for (j=0; j<nl1; j++) {           // neuron layer 1
        ResetNeuron(Neuron1[j]);}

  for (j=0; j<nl2; j++) {           // neuron layer 2
        ResetNeuron(Neuron2[j]);}

  for (j=0; j<nl3; j++) {           // neuron layer 3
        ResetNeuron(Neuron3[j]);}
}

//*****************************************


void InitThisNeuralNet()  // for testing
{
  ; // defaults
}


void PrepThisNeuralNet()  // for testing
{
   ; // defaults
}


//**********************************************************************
// inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs must be sampled very fast, hence a separate task
int i, j;
  while(true){
  for (j=0; j<nl0; j++) {
    for (i=0; i<ni; i++)   {
      Neuron0[j].in[i]=(float)SensorValue(i); // input i: touch sensor at port i
      }
    }
  }
  return;
}

//*****************************************

void SetInputPattern(int m, int n, int o)
{
   int j;
   for (j=0; j<nl0;j++)
  {
     Neuron0[j].in[0]=(float)m;
     Neuron0[j].in[1]=(float)n;
     Neuron0[j].in[2]=(float)o;
   }
}

//**********************************************************************
// calculate the individual neurons, layer by layer
//**********************************************************************

task RefreshLayers(){
  int j;
  while(true){
    for (j=0;j<nl0;j++) {
       netPropagThr(Neuron0[j]);  // weighted sum of layer 0, minus threshold
      act_01(Neuron0[j]);         // activation via 0-1 threshold function
    }
  }
  return;
}

//**********************************************************************
// learning procedure
//**********************************************************************


void LearnPerceptronRule() {         // perceptron learning mode
  int ErrorCount;
  int m,n,o;  // sensor combinations
  int i;      // inputs

  int j;      // output neurons

 do {
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="- <<  ok  >> ++";

  for (m=0; m<2; m++)    {
    for (n=0; n<2; n++)   {
     for (o=0; o<2; o++)   {
     SetInputPattern(m,n,o);           // present a virtual pattern
     wait1Msec(200);

     for (j=0;j<2;j++)
     {

       sollOut=Neuron0[j].out;
       MenuText="- <<  ok  >> ++";
       printXY(0,23, "soll:");
       printXY(25+(j*45),23,"%2.0f", sollOut);
      do                        // correct the generated output
      {
         key=getkey();

         if (key==1) {   if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut< 1) sollOut+=1;  }
        printXY(0,23, "soll:");
        printXY(25+(j*45),23,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");

      //...................................................
      if (key==4) {                     // quit learning mode
         PlaySound(soundException);
         key=0;
         return;
      }  // if key
      //....................................................

                                        // learning mode START
      //....................................................
      if (sollOut==Neuron0[j].out)      // teachOut correct
      {
         PlaySound(soundBlip);
         PlaySound(soundBlip);
         wait1Msec(100);
      }


      //.....................................................
      if (sollOut!=Neuron0[j].out)      // teachOut wrong
      {
         PlaySound(soundException);
         wait1Msec(100);
         ErrorCount+=1;
        //.....................................................
                                        // LEARN

        for (i=0; i<ni; i++)            // adapt the weights for all inputs i
        {                               // (delta rule)
           Neuron0[j].w[i] = Neuron0[j].w[i]+ (lbd*Neuron0[j].in[i]*(sollOut-Neuron0[j].out));
        }
                                        // adapt the threshold (delta rule, extended)
        Neuron0[j].th = Neuron0[j].th - (lbd*(sollOut-Neuron0[j].out));
      //.....................................................
      }  // if teachOut wrong

     }  // for j

    }  // for o
   }  // for n
  }  // for m
 } while (ErrorCount>0);

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}

//**********************************************************************
// program-flow control, menus
//**********************************************************************

int Menu_Recall() {
  eraseDisplay();
  MenuText="<Recall    Clear>";
  println(7, "%s", MenuText);
  println(0, "%s", " Hal "+Version);
  println(1, "%s", "-");
  println(2, "%s", "Reload my brain -");
  println(4, "%s", " Total Recall ?");
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  PlaySound(soundException);   }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}



int Menu_LearnSaveRun() {
  eraseDisplay();
  MenuText="<Learn  Sav  Run>";
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  SaveMemory(); }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//**********************************************************************
// main program
//**********************************************************************
int choice;


task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;
  SensorType(S3)=sensorTouch;

  nVolume=2;
  InitAllNeurons();
  PrepThisNeuralNet();

  choice=Menu_Recall();
  if (choice==1)  { RecallMemory(); } // reload the old memory

  StartTask (DisplayValues);
  StartTask (RefreshInputLayer);
  StartTask (RefreshLayers);

  while(true)
  {
    choice=Menu_LearnSaveRun();
    if (choice==1)
    {
       StopTask(RefreshInputLayer);
       LearnPerceptronRule();          // learning mode
    }

    PlaySound(soundFastUpwardTones);
    StartTask (RefreshInputLayer);    // run mode

  }

}



Posted: Sun May 18, 2008 5:19 am

... the NEXT STEP:

a neural double-layer backpropagation net with
3 inputs (touch sensors),
3 hidden neurons,
2 output neurons
with 1 output each = 2 outputs:

(because of NAN errors, RobotC still needs to be improved !! :poke: )
[image: 3*3*2 backpropagation net]
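
For orientation: LearnBackpropagation() in the listing below implements the standard backpropagation update for the logistic (Fermi) activation. Condensed, in the code's own names:

Code:
// error signal of an output neuron j (out = Neuron1[j].out):
//   f_sig1 = out*(1-out) * (sollOut - out)
// error signal of the hidden layer (backpropagated):
//   f_sig0 = out*(1-out) * sum( w[m] * f_sig1 )
// updates (lf = learning factor):
//   w  = w  + lf * (predecessor out) * f_sig
//   th = th - lf * f_sig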

Note: the size of the net is in principle freely scalable; in the declaration section you only have to specify
- the number of inputs (ni)
- the number of hidden neurons (L0)
- the number of output neurons (= number of outputs) (L1),
and you can already put e.g. a 25*10*5 (ni * L0 * L1) net into operation, as shown below.
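
For instance (illustration only, these values are not from the post), a 25*10*5 net would just change these declarations at the top of the listing:

Code:
const int L0 = 10;    // 10 hidden neurons
const int L1 =  5;    // 5 output neurons (= outputs)
const int ni = 25;    // 25 inputs per neuron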


Code:
// Trainable 2-layer neural net
// backpropagation net with 3 sensor inputs (touch at S1, S2, S3)
// feeding 3 hidden neurons,
// then 2 output neurons with 2 outputs (shown on the display)
// (c) H. W. 2008
// new: automatic training from file
   string Version="0.479";


#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine


//**********************************************************************
// basic declarations for neural nets
//**********************************************************************


const int L0 =  3;    // max. neurons in layer 0 (input layer)
const int L1 =  2;    // max. neurons in layer 1 (output layer in 2-layer nets)
const int L2 =  1;    // max. neurons in layer 2
const int L3 =  1;    // max. neurons in layer 3

const int ni = 3;      // max. dendrite inputs (counted from 0)



//**********************************************************************
// neuron structure
//**********************************************************************

float lf     = 0.7;                  // learning factor
int CalcTime = (L0+L1+L2+L3)*10;     // network calculation time

float sollOut;                       // target output

typedef struct{
   float in[ni];    // single inputs (dendrites)
   float w[ni];     // single weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float d;         // delta = error signal
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[L0];  // neuron layer 0  (input layer)
tNeuron Neuron1[L1];  // neuron layer 1  (output layer in 2-layer nets)
tNeuron Neuron2[L2];  // neuron layer 2
tNeuron Neuron3[L3];  // neuron layer 3


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// input/output functions (buttons, display)
//**********************************************************************

int key;               // pressed NXT button
int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=100;       // guard against accidental [ESC] presses

  nBtn = nNxtButtonPressed; // check for button press
  switch (nBtn) {
    case kLeftButton:  return 1;
    case kEnterButton: return 2;
    case kRightButton: return 3;
    case kExitButton:  return 4;
    default:           return 0;
  }
  return 0;
}

//*****************************************

int getkey() {
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)
  { buf=buttonPressed();
    wait1Msec(20);}
  return k;
}

void pause() {

   int k=0;

   do {k=getkey();}
   while (k==0);

}



//**********************************************************************
string MenuText="";    // menu control

task DisplayValues(){
  int i;  // input number  = sensors at the hidden layer
  int j;  // neuron number = outputs at the output layer

   while(true) {

    printXY(00,    63, "%4.1f", Neuron0[0].w[0]);  // neuron L0 [0]
    printXY(26,    63, "%4.1f", Neuron0[0].w[1]);  // input layer
    printXY(52,    63, "%4.1f", Neuron0[0].w[2]);
    printXY(78,    63, "%4.1f", Neuron0[0].th);

    printXY(00,    55, "%4.1f", Neuron0[1].w[0]);  // neuron L0 [1]
    printXY(26,    55, "%4.1f", Neuron0[1].w[1]);  // input layer
    printXY(52,    55, "%4.1f", Neuron0[1].w[2]);
    printXY(78,    55, "%4.1f", Neuron0[1].th);

    printXY(00,    47, "%4.1f", Neuron0[2].w[0]);  // neuron L0 [2]
    printXY(26,    47, "%4.1f", Neuron0[2].w[1]);  // input layer
    printXY(52,    47, "%4.1f", Neuron0[2].w[2]);
    printXY(78,    47, "%4.1f", Neuron0[2].th);

    printXY(00,    39, "%4.1f", Neuron1[0].w[0]);  // neuron L1 [0]
    printXY(26,    39, "%4.1f", Neuron1[0].w[1]);  // output layer
    printXY(52,    39, "%4.1f", Neuron1[0].w[2]);
    printXY(78,    39, "%4.1f", Neuron1[0].th);

    printXY(00,    31, "%4.1f", Neuron1[1].w[0]);  // neuron L1 [1]
    printXY(26,    31, "%4.1f", Neuron1[1].w[1]);  // output layer
    printXY(52,    31, "%4.1f", Neuron1[1].w[2]);
    printXY(78,    31, "%4.1f", Neuron1[1].th);

    printXY(00,    23, "%2.0f", Neuron0[0].in[0]); // inputs (3)
    printXY(16,    23, "%2.0f", Neuron0[0].in[1]);
    printXY(32,    23, "%2.0f", Neuron0[0].in[2]);

    printXY(50,    23, "%4.1f", Neuron1[0].out);   // outputs (2)
    printXY(76,    23, "%4.1f", Neuron1[1].out);


    println(7, "%s", MenuText);                    // menu line for button control

  }
  return;
}

//**********************************************************************



//**********************************************************************
// File I/O
//**********************************************************************
const string sFileName = "Memory.dat";

TFileIOResult nIoResult;
TFileHandle   fHandle;

int   nFileSize     = (L0 + L1 + L2 + L3 +1)*100;


void SaveMemory()
{
   int i, j;

   CloseAllHandles(nIoResult);
   println(6,"%s","Save Memory...");
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   Delete(sFileName, nIoResult);

  OpenWrite(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {
    eraseDisplay();

    for (j=0;j<L0;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron0[j].th);     }

    for (j=0;j<L1;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron1[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) {
       PlaySound(soundUpwardTones);
       println(6,"%s","Save Memory: OK"); }
    else {
       PlaySound(soundException);
       println(6,"%s","Save Memory: ERROR"); }
  }
  else PlaySound(soundDownwardTones);

}

//*****************************************

void RecallMemory()
{
  int i, j;
   println(6,"%s","Recall Memory");
  CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   OpenRead(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {

  for (j=0;j<L0;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron0[j].th);     }

  for (j=0;j<L1;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron1[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else {
       PlaySound(soundException);
       println(6,"%s","Recall: ERROR"); }
  }
  else PlaySound(soundDownwardTones);
  eraseDisplay();

}


//**********************************************************************
// functions of the neural net
//**********************************************************************

//**********************************************************************
// inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs from the sensor values
int i, j;
  while(true){
  for (j=0; j<L0; j++) {   // all sensor inputs to all input-layer neurons
    for (i=0; i<ni; i++)   {
      Neuron0[j].in[i]=(float)SensorValue(i);
      }
    }
  }
  return;
}

//*****************************************



void SetInputPattern(int pat) // inputs generated virtually from a bit pattern
{
   int j, n, i;

   printXY(80, 63, "%d", pat);
   for (j=0; j<L0;j++)
  {
     i = pat;                     // fresh copy of the pattern for every neuron
                                  // (the shifts below would otherwise consume it
                                  // after the first neuron)
     for (n = 0; n <=ni-1; n++)
    {
      Neuron0[j].in[n]= i & 1;    // lowest bit -> input n
      i >>= 1;                    // shift to the next bit
    }
  }
}





//**********************************************************************
// propagation functions: sum up the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur){      // propagation function 1:
  int i=0;                          // calculates the total input (net)
  float s=0;

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur){   // propagation function 2:
  int i=0;                          // calculates the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<ni;i++){
     s+= (neur.in[i]*neur.w[i]);    // weighted sum
  }
  neur.net=s-neur.th;               // minus threshold
}

//**********************************************************************
// activation functions incl. output (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1, T1: x -> [0; +1]
   if (neur.net>=0)                 // 0-1 threshold function
      {neur.out=1;}                 // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2, T2: x -> x
   neur.out=neur.net;               // identity function
}


void actFermi(tNeuron &neur){       // activation function 3, T3: x -> [0; +1]
  float val;                        // Fermi (logistic) function, differentiable
  float c=3.0;                      // c = steepness; c=1: flat,
                                    // c=10: jump between x E [-0.1; +0.1]

  val= (1/(1+(exp(-c*neur.net))));
  neur.out=val;
}

void actTanH(tNeuron &neur){        // activation function 4, T4: x -> [-1; +1]
   float val;                       // hyperbolic tangent, differentiable
   float c=2.0;                     // c = steepness; c=1: flat
  val= tanh(c*neur.net);            // c=3: jump between x E [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur, int rand){ // everything to zero or randomized
   int i;



   for (i=0; i<ni; i++) {
      neur.in[i]=0;                   // single input (dendrite)
      if (rand==0) {neur.w[i]=0;}     // single weight (dendrite) = 0
      else
      neur.w[i]=-1.0+random(10)*0.2;  // single weight (dendrite) randomized

   }
   neur.net=0;                        // total input
   if (rand==0) {neur.th=0;}          // threshold = 0
   else
   neur.th=-1.0 + random(10)*0.2;     // threshold randomized
   neur.out=0;                        // calculated activation value = output
}

//*****************************************

void InitAllNeurons(){             // reset all net neurons
   int j;                          // (to 0 or randomized)


   for (j=0; j<L0; j++) {          // neuron layer 0
        ResetNeuron(Neuron0[j],1);}

  for (j=0; j<L1; j++) {           // neuron layer 1
        ResetNeuron(Neuron1[j],1);}

  for (j=0; j<L2; j++) {           // neuron layer 2
        ResetNeuron(Neuron2[j],0);}

  for (j=0; j<L3; j++) {           // neuron layer 3
        ResetNeuron(Neuron3[j],0);}
}

//*****************************************

void PrepThisNeuralNet()  // for testing
{
   ; // defaults
}


//**********************************************************************
// calculate the individual neurons, layer by layer
//**********************************************************************

task RefreshLayers(){
  int j, k;

  ClearTimer(0);
  while(true){


     for (j=0;j<L0;j++) {
       netPropagThr(Neuron0[j]);    // net input, layer 0
      actFermi(Neuron0[j]);         // activation T: Fermi function -> out
      for (k=0;k<L1;k++) {
        Neuron1[k].in[j] = Neuron0[j].out; } // synapse Neuron0 -> Neuron1
    }

    for (j=0;j<L1;j++) {
      netPropagThr(Neuron1[j]);    // net input, layer 1
      actFermi(Neuron1[j]);        // activation T: Fermi function -> out
    }

  }
  return;
}

//**********************************************************************
// learning procedure
//**********************************************************************



//**********************************************************************
void LearnPerceptronRule() {         // perceptron learning mode
  int ErrorCount;
  int in;        // sensor combination (input pattern)
  int i;         // number of inputs
  int j;         // number of output neurons

 do
 {
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="-- <<  ok  >> ++";


   SetInputPattern(in);           // present a virtual pattern
                                  // (note: 'in' is never assigned in this function;
                                  // main() uses LearnBackpropagation() instead)
   wait1Msec(CalcTime);

   for (j=0;j<2;j++)  // 1 to number of output neurons
   {

       sollOut=0;
       MenuText="-- <<  ok  >> ++";
       printXY(0,15, "soll:");
       printXY(48+(j*25),15,"%2.0f", sollOut);
      do                        // correct the generated output
      {
         key=getkey();

         if (key==1) {   if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut< 1) sollOut+=1;  }
        printXY(0,15, "soll:");
        printXY(48+(j*25),15,"%2.0f", sollOut);
        wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");

      //...................................................
      if (key==4) {                     // quit learning mode
         PlaySound(soundException);
         key=0;
         return;
      }
      //....................................................

                                        // learning mode START
      //....................................................
      if (sollOut==Neuron0[j].out)      // teachOut correct
      {
         PlaySound(soundBlip);
         PlaySound(soundBlip);
         wait1Msec(100);
      }
      //....................................................
      if (sollOut!=Neuron0[j].out)      // teachOut wrong
      {
         PlaySound(soundException);
         wait1Msec(100);
         ErrorCount+=1;
         //...................................................
                                        // LEARN

         for (i=0; i<ni; i++)           // adapt the weights for all inputs i
         {                              // (delta rule; the original bound i<=L0
                                        // overran w[ni])
            Neuron0[j].w[i] = Neuron0[j].w[i]+ (lf *Neuron0[j].in[i]*(sollOut-Neuron0[j].out));
         }
                                        // adapt the threshold (delta rule, extended;
                                        // '-' as in the other listings of this thread)
         Neuron0[j].th = Neuron0[j].th - (lf *(sollOut-Neuron0[j].out));
         //...................................................
      } // if (sollOut!=Neuron0[j].out)

   } // for j


 } while (ErrorCount>0);

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}

//**********************************************************************
//**********************************************************************

int IOpattern[(1<<ni)][L1];      // fix 471-002 [2<<(ni-1)] -> [(1<<ni)]
//**********************************************************************


void LearnBackpropagation() {    // backpropagation learning mode:
                                 // 1 hidden/input layer (L0) + 1 output layer (L1)
  int idummy;

  int count;
  int maxCount=2000;

  int in;         // presented sensor/input pattern
  int i;          // counter, inputs
  int j, k;       // index of the output neurons L1
  int m;          // index of the input neurons L0


  float f;        // error (sollOut-out)
  float f_sig1;   // error signal, layer 1, for learning weight and threshold
  float f_sig0;   // error signal, layer 0, for learning weight and threshold

  float f_sum=0;  // error of the hidden layer (sum of (weight*error signal))
  float out;      // neuron out, dummy
  float fehler=0; // sum of the error signals
  float epsilon;  // max. permissible net error

  float delta_w0,  delta_w1;  // change value of the weights
  float delta_th0, delta_th1; // change value of the thresholds

  bool LearnModeAuto=false;   // learn automatically or manually by button


  count=maxCount;
  epsilon=(float)(ni*L1)*0.1; // max. 10% error

  do {
   fehler=0;
   count-=1;

   if (!LearnModeAuto)  PlaySound(soundBeepBeep);
    else PlaySound(soundBlip);

    for (in=0; in < (1<<ni); in++)    // in = first to last input pattern
   {
     SetInputPattern(in);        // present the input pattern

     wait1Msec(CalcTime);        // let all layers be calculated

     for (j=0;j<L1;j++)          // j = first to last output neuron
     {
//=====================================================================================

        if (!LearnModeAuto)                   // read the target via buttons
       {

         sollOut=0;
         MenuText="-- <<  ok  >> ++";
         printXY(0,15, "soll:");
         printXY(48+(j*25),15,"%2.0f", sollOut);
         do
         {
           key=getkey();
           if (key==4) {                     // skip this learning step
             IOpattern[in][j]=-99;
             key=0;
             printXY(48+(j*25),15,"   ");
             goto _NEXT;
           }  // if key

           if (key==1) {   if (sollOut==1) sollOut=0;  }
           else
           if (key==3) { if (sollOut==0) sollOut=1;  }

           IOpattern[in][j]=sollOut;        // write the I/O pattern into the I/O array

           printXY(0,15, "soll:");
           printXY(48+(j*25),15,"%2.0f", sollOut);
           wait1Msec(100);
        } while ((key!=2));
      }
//=====================================================================================
      else                                  // read automatically from the I/O array
      {
         PlaySound(soundBlip);
         if (IOpattern[in][j]!=-99)
         {  sollOut=IOpattern[in][j];}
         else
         break;

         printXY(48+(j*25),15,"%2.0f", sollOut);
      }
//=====================================================================================

      PlaySound(soundBlip);
      wait1Msec(CalcTime);
      println(6, " ");

    // step 1: determine the error signal of the output layer
    // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

      out=Neuron1[j].out;
      f= (sollOut-out);
      if ((abs(f))>=0.99999)         // correction for extreme I/O errors
        { out*=0.9;  sollOut*=0.9;}
      f_sig1=out*(1-out)* f;         // error signal (j) for the output layer

      Neuron1[j].d=f_sig1;           // store it in Neuron1[j]

      fehler=fehler + abs(sollOut-out);  // total error of all output neurons

   _NEXT:

    } // for j = first to last output neuron



     // step 2: determine the error signals of the hidden/input layer
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   for (k=0;k<L1;k++)    // k = first to last neuron in the output layer

   {
         if (IOpattern[in][k]==-99) continue;

         f_sig1=Neuron1[k].d;                          // error signal of the successor (output) neuron (k)

         f_sum=0;
         for (m=0; m<L0; m++)
        {  f_sum=f_sum + (Neuron1[k].w[m] * f_sig1);  }    // sum over all (weights(L1)*error signals(L1))

        out=Neuron1[k].out;
        f_sig0 = out * (1-out) * f_sum;                    // error signal of the input/hidden layer L0

        for (m=0; m<L0; m++)
        {
           Neuron0[m].d    = f_sig0;                       // store the error signal in Neuron0
        }

     // step 3: calculate new weights and thresholds for the output layer L1
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

         for (m=0; m<L0; m++)
        {

          out=Neuron0[m].out;                              // predecessor out
          if (abs(f_sig1)<0.00001) {delta_w1=0;}           // fix 474.002
          else
          delta_w1  = lf  * out * f_sig1;                  // change value for the output-layer weights

          Neuron1[k].w[m] = Neuron1[k].w[m] + delta_w1;    // new weights for the output layer L1
          if ((abs(Neuron1[k].w[m])>8)&& (fehler>3)) ResetNeuron(Neuron1[k],1);  // fix 0479.001
        }

        delta_th1 = lf  * f_sig1;                          // change value delta_th

        Neuron1[k].th  = Neuron1[k].th - delta_th1;        // new thresholds for the output layer L1
        if ((abs(Neuron1[k].th)>8)&& (fehler>3)) ResetNeuron(Neuron1[k],1);  // fix 0479.001


     // step 4: calculate new weights and thresholds for the input layer L0
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


        for (m=0; m<L0; m++)
        {
           f_sig0 = Neuron0[m].d;
           f_sig1 = Neuron1[k].d;

          for (i=0; i<ni; i++)
          {
             out=Neuron0[m].in[i];                           // predecessor out = sensor out = own input
             delta_w0  = lf  * out * f_sig1;                 // change value for the input-layer weights

             Neuron0[m].w[i] = Neuron0[m].w[i] + delta_w0;   // new weights for the input layer L0
             if ((abs(Neuron0[m].w[i])>8)&& (fehler>3)) ResetNeuron(Neuron0[m],1);  // fix 0479.001
          }

          delta_th0 = lf  * f_sig0;                          // change value for the input-layer thresholds
          Neuron0[m].th  = Neuron0[m].th - delta_th0;        // new thresholds for the input layer L0
          if ((abs(Neuron0[m].th)>8)&& (fehler>3)) ResetNeuron(Neuron0[m],1);  // fix 0479.001 (the original
                                                             // reset Neuron1[m], which overruns Neuron1[L1])
        }

    } // for k = first to last neuron in the output layer

  } // for in = first to last input pattern

  //...................................................
  if (!LearnModeAuto)
  {
     MenuText="Menu manual. auto";
    PlaySound(soundLowBuzzShort);
    do {
      key=getkey();
      if (key==1)    {  return;   }
      if (key==2)    {  LearnModeAuto=false;  }
      if (key==3)    {  LearnModeAuto=true;   }
      if (key==4)    {  return; }
    }
    while (key==0);
    key=0;
  }
  //...................................................
  else
  {
     PlaySound(soundBlip);
     key=getkey();
     if (key!=0)
     {
        goto _ENDE;  // abort on any button
     }
  }
  //...................................................

  if (fehler>4) lf=0.8;      // adapt the learning factor
  else
  if (fehler<1.5) lf=0.4;
  else
  lf=fehler/5;

  //...................................................

  eraseDisplay();
  idummy=(int)(lf*10);
  MenuText=(string)count+" lf=."+(string)idummy;
  MenuText=MenuText+" f="+(string)fehler;

  //...................................................
  if ((count==(maxCount/3))&&(fehler>=(4*epsilon) ))
  { InitAllNeurons(); PlaySound(soundDownwardTones); count=maxCount;}
  //...................................................

 } while ((fehler>epsilon)&&(count>=0));

//...................................................
_ENDE:
if (fehler>epsilon) PlaySound(soundDownwardTones);
else
{ PlaySound(soundUpwardTones); PlaySound(soundUpwardTones);}
println(6, "Weiter: <Taste>");      // "Continue: <button>"
pause();
println(6, "");
}
//**********************************************************************
//**********************************************************************




//**********************************************************************
// program-flow control, menus
//**********************************************************************

int Menu_Recall() {
  eraseDisplay();
  MenuText="<Recall    Clear>";
  println(7, "%s", MenuText);
  println(0, "%s", " Hal "+ Version);
  println(1, "%s", "----------------");
  println(2, "%s", "Reload my brain -");
  println(4, "%s", " Total Recall ?");
  do
  {
     key=getkey();
     if (key==1)    {  return 1;       }
     if (key==2)    {  PlaySound(soundException);  }
     if (key==3)    {  return 3;       }
     if (key==4)    {  PlaySound(soundException);  }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//------------------------------------------------------

int Menu_Quit() {
  eraseDisplay();
  MenuText="<Quit Sav Resume>";
  println(7, "%s", MenuText);

  do
  {
     key=getkey();
     if (key==1)    {  return 1;       }
     if (key==2)    {  SaveMemory();   }
     if (key==3)    {  return 3;       }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//------------------------------------------------------

int Menu_LearnSaveRun() {
  eraseDisplay();
  MenuText="<Learn  Sav  Run>";
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  SaveMemory(); }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException);
                      return 4;}

     wait1Msec(100);
  }
  while ((key==0)||(key==2));
}

//**********************************************************************
// Hauptprogramm
//**********************************************************************
int choice;


task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;
  SensorType(S3)=sensorTouch;

  nVolume=2;


  _START_NEW_:
  choice=Menu_Recall();
  if (choice==1)  { RecallMemory(); } // altes Gedaechtnis laden
  else
  if (choice==3)  { InitAllNeurons();
                    PrepThisNeuralNet();} //  neu initialisieren


  StartTask (DisplayValues);
  StartTask (RefreshLayers);

//============================================================================================

  while(true)
  {
     choice=Menu_LearnSaveRun();        // Haupt-Menue
//============================================================================================
    if (choice==1)
    {
       StopTask(RefreshInputLayer);
       LearnBackpropagation();          // Lern-Modus Backpropagation
    }

//============================================================================================
    if (choice==4)                     // ENDE ?
    {
       StopTask(DisplayValues);
       eraseDisplay();

       MenuText="<Quit    Resume>";
      println(7, "%s", MenuText);

      choice=0;
      choice=Menu_Quit();

       if (choice==1) {  println(4, "      E N D");
                         wait1Msec(500);
                         StopAllTasks();
                         goto _END_; }

       if (choice==3) {  StartTask(DisplayValues);
                         goto _START_NEW_;

    }
   }
//============================================================================================

    MenuText="Menue: [ESC]";           // Run-Modus
    PlaySound(soundFastUpwardTones);
    StartTask (RefreshInputLayer);
    do
    {
       key=getkey();
      wait1Msec(100);
    } while (key!=4);
  }
//============================================================================================

  _END_:

}


_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Last edited by Ford Prefect on Sat Mar 21, 2009 6:21 pm, edited 15 times in total.



Mon Jun 16, 2008 3:18 pm
Guru
User avatar

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post 
and last, but not least, I now proudly present an
Elman Net
with an additional 3rd layer, which feeds the previous outputs of the hidden layer back to its inputs.
It can be trained by backpropagation, too.

Image

the code:

Code:
// Trainable 3-layer neural network
// Elman net with 3 sensor inputs (touch sensors on S1, S2, S3)
// feeding 3 neurons in the hidden layer,
// an Elman feedback (context) layer, also with 3 neurons,
// and 2 output neurons with 2 outputs (shown on the display)
// (c) H. W. 2008
   string Version="0.711-001";


#define printXY nxtDisplayStringAt
#define println nxtDisplayTextLine


//**********************************************************************
// Basic declarations for neural networks
//**********************************************************************


const int L0 =  3;    // max. neurons in layer 0 (hidden layer, takes the inputs)
const int L1 =  2;    // max. neurons in layer 1 (output layer of a 2-layer net)
const int L2 =  3;    // max. neurons in layer 2 (feedback/context layer)
const int L3 =  1;    // max. neurons in layer 3

const int ns = 3;     // number of sensors
const int ni = 6;     // number of inputs (sensors + feedback)

float lf     = 0.7;   // learning factor (learning rate)

int key;              // pressed NXT button
string MenuText="";   // menu control line

float sollOut=0;      // target ("soll" = desired) output for training


//**********************************************************************
// Neuron structure
//**********************************************************************

typedef struct{
   float in[ni];    // individual inputs (dendrites)
   float w[ni];     // individual weights (one per dendrite)
   float net;       // total input
   float th;        // threshold
   float d;         // delta = error signal
   float out;       // output (axon): e.g. 0 or 1
} tNeuron;

//**********************************************************************

tNeuron Neuron0[L0];  // neuron layer 0  (hidden layer, takes the inputs)
tNeuron Neuron1[L1];  // neuron layer 1  (output layer of a 2-layer net)
tNeuron Neuron2[L2];  // neuron layer 2  (Elman feedback layer)
tNeuron Neuron3[L3];  // neuron layer 3  (reserve)


//**********************************************************************
//  mathematical helper functions
//**********************************************************************


float tanh(float x)  // hyperbolic tangent
{
   float e2x;
   e2x=exp(2*x);
   return((e2x-1)/(e2x+1));
}

//**********************************************************************
// Input/output functions (keypad, display)
//**********************************************************************

int buttonPressed(){

  TButtons nBtn;
  nNxtExitClicks=4;         // EXIT must be clicked 4x to kill the program
                            // (guards against accidental presses)
  nBtn = nNxtButtonPressed; // check for button press
  switch (nBtn) {
    case kLeftButton:  return 1;
    case kEnterButton: return 2;
    case kRightButton: return 3;
    case kExitButton:  return 4;
    default:           return 0;
  }
  return 0;
}

//*****************************************

int getkey() {
   int k, buf;

   k=buttonPressed();
   buf=buttonPressed();
  while (buf!=0)              // wait until the button is released (simple debounce)
  { buf=buttonPressed(); }
  return k;
}

//**********************************************************************

task DisplayValues(){
  int i;  // input number  = sensors feeding the hidden layer
  int j;  // neuron number = outputs of the output layer

   while(true) {

    printXY( 0, 55, " out(0) | out(1)");
    printXY(48, 47, "|");
    printXY(48, 39, "|");
    printXY(48, 31, "|");

    printXY( 0, 63, "IN:");                        //  TOP
    printXY(15, 63, "%2.0f", Neuron0[0].in[0]);    //  inputs side by side
    printXY(30, 63, "%2.0f", Neuron0[0].in[1]);
    printXY(45, 63, "%2.0f", Neuron0[0].in[2]);

                                                   //  LEFT AREA
    printXY(00,    47, "%3.1f", Neuron1[0].w[0]);  //  weights of the output layer
    printXY(12,    39, "%3.1f", Neuron1[0].w[1]);  //  (neuron 0)
    printXY(24,    47, "%3.1f", Neuron1[0].w[2]);

                                                   //  RIGHT AREA
    printXY(00+53, 47, "%3.1f", Neuron1[1].w[0]);  //  weights of the output layer
    printXY(12+53, 39, "%3.1f", Neuron1[1].w[1]);  //  (neuron 1)
    printXY(24+53, 47, "%3.1f", Neuron1[1].w[2]);

                                                   //  CENTER-BOTTOM LEFT
    printXY(18,    31, "%5.2f", Neuron1[0].th);    //  threshold of output neuron 0

    printXY( 0, 31, "th=");                        //  CENTER-BOTTOM RIGHT
    printXY(18+53, 31, "%5.2f", Neuron1[1].th);    //  threshold of output neuron 1

    printXY( 0, 23, "OUT");                        //  BOTTOM (below the output layer)
    printXY(48,    23, "%3.1f", Neuron1[0].out);   //  1st output
    printXY(48+25, 23, "%3.1f", Neuron1[1].out);   //  2nd output


                                                   //  VERY BOTTOM
    println(7, "%s", MenuText);                    //  menu line for button control


  }
  return;
}

//**********************************************************************

void Pause() {
   while(true) wait1Msec(50);
}


//**********************************************************************
// File I/O
//**********************************************************************
const string sFileName = "Memory.dat";

TFileIOResult nIoResult;
TFileHandle   fHandle;

int   nFileSize     = (L0 + L1 + L2 + L3 +1)*100;   // generous upper bound for all weights and thresholds
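// layout of Memory.dat: for every neuron of every layer, first its ni
// weights, then its threshold, all stored as raw floats in layer order
// (0, 1, 2, 3); RecallMemory() reads them back in exactly the same order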


void SaveMemory()
{
   int i, j;

   CloseAllHandles(nIoResult);
   println(6,"%s","Save Memory...");
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   Delete(sFileName, nIoResult);

  OpenWrite(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {
    eraseDisplay();

    for (j=0;j<L0;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron0[j].th);     }

    for (j=0;j<L1;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron1[j].th);     }

    for (j=0;j<L2;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron2[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron2[j].th);     }

    for (j=0;j<L3;j++)   {
      for (i=0; i<ni;i++)
      {   WriteFloat (fHandle, nIoResult, Neuron3[j].w[i]); }
        WriteFloat (fHandle, nIoResult, Neuron3[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) {
       PlaySound(soundUpwardTones);
       println(6,"%s","Save Memory: OK"); }
    else {
       PlaySound(soundException);
       println(6,"%s","Save Memory: ERROR"); }
  }
  else PlaySound(soundDownwardTones);

}

//*****************************************

void RecallMemory()
{
  int i, j;
   println(6,"%s","Recall Memory");
  CloseAllHandles(nIoResult);
   wait1Msec(500);
   PlaySound(soundBeepBeep);
   wait1Msec(11);

   OpenRead(fHandle, nIoResult, sFileName, nFileSize);
  if (nIoResult==0) {

  for (j=0;j<L0;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron0[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron0[j].th);     }

  for (j=0;j<L1;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron1[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron1[j].th);     }

  for (j=0;j<L2;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron2[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron2[j].th);     }

  for (j=0;j<L3;j++) {
     for (i=0; i<ni;i++)
     { ReadFloat (fHandle, nIoResult, Neuron3[j].w[i]); }
       ReadFloat (fHandle, nIoResult, Neuron3[j].th);     }


    Close(fHandle, nIoResult);
    if (nIoResult==0) PlaySound(soundUpwardTones);
    else {
       PlaySound(soundException);
       println(6,"%s","Recall: ERROR"); }
  }
  else PlaySound(soundDownwardTones);
  eraseDisplay();

}


//**********************************************************************
// Functions of the neural network
//**********************************************************************

//**********************************************************************
// Inputs
//**********************************************************************

task RefreshInputLayer(){  // inputs from the sensor readings
int i, j;
  while(true){
  for (j=0; j<L0; j++) {   // feed all sensor inputs to all input-layer neurons
    for (i=0; i<ns; i++)   {
      Neuron0[j].in[i]=(float)SensorValue(i);
      }
    }
  }
  return;
}

//*****************************************



void SetInputPattern(int i) // virtually generated inputs (training patterns)
{
   int j, n, pat;

   printXY(80, 63, "%d", i);
   for (j=0; j<L0;j++)
  {
     pat = i;                       // every input neuron gets the same bit pattern
     for (n = 0; n <=ni-1; n++)
    {
      Neuron0[j].in[n]= pat & 1;    // peel off one bit per input line
      pat >>= 1;
    }
  }
}
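// example: SetInputPattern(5) puts the bits 1,0,1 (LSB first) on in[0..2]
// of every input neuron, i.e. the virtual sensor pattern S1=1, S2=0, S3=1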




//**********************************************************************
// Propagation functions: sum up the weighted inputs (in -> net)
//**********************************************************************

void netPropag(tNeuron &neur, int max){      // propagation function 1
  int i;                            // computes the total input (net)
  float s=0;

  for(i=0;i<max;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s;
}

void netPropagThr(tNeuron &neur, int max){   // propagation function 2
  int i;                            // computes the total input (net)
  float s=0;                        // and takes the threshold into account

  for(i=0;i<max;i++){
     s+= (neur.in[i]*neur.w[i]);     // weighted sum
  }
  neur.net=s-neur.th;               // minus the threshold
}

//**********************************************************************
// Activation functions incl. output (net -> act -> out)
//**********************************************************************


void act_01(tNeuron &neur){         // activation function 1 T1: x -> [0; +1]
   if (neur.net>=0)                 // 0/1 threshold function
      {neur.out=1;}                 // function value: 0 or 1
   else {neur.out=0;}
}

void actIdent(tNeuron &neur){       // activation function 2 T2: x -> x
   neur.out=neur.net;               // identity function
}                                   // function value: the identity

void actFermi(tNeuron &neur){       // activation function 3 T3: x -> [0; +1]
  float val;                        // Fermi (logistic) function, differentiable
  float c=6.0;                      // c = steepness; c=1: shallow,
                                    // c=10: step between x in [-0.1; +0.1]
  val= (1/(1+(exp(-c*neur.net))));
  neur.out=val;
}
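// note: the logistic function out = 1/(1+exp(-c*net)) has the convenient
// derivative d(out)/d(net) = c*out*(1-out); the backpropagation routine
// below relies on the factor out*(1-out) when computing the error signals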

void actTanH(tNeuron &neur){        // activation function 4 T4: x -> [-1; +1]
   float val;                       // hyperbolic tangent, differentiable
   float c=2.0;                     // c = steepness; c=1: shallow,
  val= tanh(c*neur.net);            // c=3: step between x in [-0.1; +0.1]
  neur.out=val;
}



//**********************************************************************
// Reset / Init
//**********************************************************************

void ResetNeuron(tNeuron &neur, int rand){ // set everything to zero or randomize
   int i;

   for (i=0; i<ni; i++) {
      neur.in[i]=0;                    // individual input (dendrite)
      if ((rand==0) || (i>=ns))
        {neur.w[i]=1;}                 // individual weight (dendrite) = 1
      else
        neur.w[i]=-1.0+random(10)*0.2; // individual weight randomized in [-1; +1]
   }
   neur.net=0;                         // total input
   if (rand==0)
     {neur.th=0;}                      // threshold = 0
   else
     neur.th=-1.0+random(10)*0.2;      // threshold randomized in [-1; +1]

   neur.out=0;                         // computed activation value = output
}

//*****************************************

void InitAllNeurons(){             // reset all neurons of the net
   int j;                          // (to 0 or randomized)

  for (j=0; j<L0; j++) {           // neuron layer 0 (with the inputs): randomized
        ResetNeuron(Neuron0[j],1);}

  for (j=0; j<L1; j++) {           // neuron layer 1 (with the outputs): randomized
        ResetNeuron(Neuron1[j],1);}

  for (j=0; j<L2; j++) {           // neuron layer 2 (feedback): w=1, th=0
        ResetNeuron(Neuron2[j],0);}

  for (j=0; j<L3; j++) {           // neuron layer 3
        ResetNeuron(Neuron3[j],0);}
}

//*****************************************

void PrepThisNeuralNet()  // for testing
{
   ; // defaults
}


//**********************************************************************
// compute the individual neurons layer by layer
//**********************************************************************

task RefreshLayers(){
  int j, k;


  while(true){

     for (j=0;j<L2;j++) {                         // layer 2: Elman (context) layer
        Neuron2[j].in[0]=Neuron0[j].out;          // Elman neuron2 input <- neuron0 output
        Neuron2[j].out=Neuron2[j].in[0];          // Elman input/output: identity

        for (k=0;k<L0;k++)                        // to every k-th neuron of layer L0:
        { Neuron0[k].in[ns+j] = Neuron2[j].out; } // feedback from all Elman neurons (L2)
    }

     for (j=0;j<L0;j++) {                         // layer 0: hidden
        netPropagThr(Neuron0[j], ni);             // all inputs = ni = sensors (ns) + feedback (L2)
        actFermi(Neuron0[j]);                     // activation T: Fermi function -> out

        for (k=0;k<L1;k++)                        // to every k-th neuron of output layer L1
        { Neuron1[k].in[j] = Neuron0[j].out; }    // j-th input of k-th neuron1 <- j-th neuron0 output
    }

    for (j=0;j<L1;j++) {               // layer 1: output layer with the outputs
      netPropagThr(Neuron1[j], L0);    // number of inputs = size of the preceding layer
      actFermi(Neuron1[j]);            // activation T: Fermi function -> out
    }

  }
  return;
}
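// data flow per cycle: sensors -> Neuron0 (hidden) -> Neuron1 (outputs),
// while Neuron2 (context layer) holds the previous Neuron0 outputs and
// feeds them back into Neuron0's inputs ns..ni-1 - the Elman "memory"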

//**********************************************************************
// Learning procedures
//**********************************************************************


void LearnPerceptronRule() {         // perceptron learning mode
  int ErrorCount;
  int in;      // sensor combination (input pattern)
  int i;       // input index
  int j;       // output neuron index

 do
 {
  ErrorCount=0;
  PlaySound(soundBeepBeep);
  MenuText="-- <<  ok  >> ++";

  for (in=0; in < (2<<(ni-1)); in++)    // in = first to last input pattern (2^ni combinations)
  {
     SetInputPattern(in);        // apply the input pattern

     wait1Msec(200);

     for (j=0;j<2;j++)  // j = first to last output neuron
     {

       sollOut=0;
       MenuText="-- <<  ok  >> ++";
       printXY(0,15, "soll:");
       printXY(48+(j*25),15,"%2.0f", sollOut);
      do                        // correct the suggested output
      {
         key=getkey();

         if (key==1) { if (sollOut>0) sollOut-=1;  }
         else
         if (key==3) { if (sollOut<1) sollOut+=1;  }
         printXY(0,15, "soll:");
         printXY(48+(j*25),15,"%2.0f", sollOut);
         wait1Msec(100);
      } while ((key!=2)&&(key!=4));

      println(5, " ");

      //...................................................
      if (key==4) {                     // end the learning mode
         PlaySound(soundException);
         key=0;
         return;
      }
      //....................................................

                                        // learning mode START
      //....................................................
      if (sollOut==Neuron0[j].out)      // teachOut correct
      {
         PlaySound(soundBlip);
         PlaySound(soundBlip);
         wait1Msec(100);
      }
      //....................................................
      if (sollOut!=Neuron0[j].out)      // teachOut wrong
      {
         PlaySound(soundException);
         wait1Msec(100);
         ErrorCount+=1;
         //...................................................
                                        // LEARN

         for (i=0; i<ni; i++)           // for all inputs i
         {                              // adjust the weights (delta rule)
            Neuron0[j].w[i] = Neuron0[j].w[i]+ (lf *Neuron0[j].in[i]*(sollOut-Neuron0[j].out));
         }

         if (sollOut!=Neuron0[j].out)   // adjust the threshold (extended delta rule)
         {
            Neuron0[j].th = Neuron0[j].th + (lf *(sollOut-Neuron0[j].out));
         }
         //...................................................
      } // if (sollOut!=Neuron0[j].out)

     } // for j (output neurons)
   } // for in (input patterns)


 } while (ErrorCount>0);


PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);
}
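// the delta rule in one line: w_i(new) = w_i + lf * in_i * (sollOut - out);
// a weight only moves if its input was active and the output was wrong,
// and the step size is set by the learning factor lf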

//**********************************************************************

int IOpattern[2<<(ns-1)][L1];
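// 2<<(ns-1) = 2^ns = 8 possible sensor patterns; IOpattern[in][j] holds the
// taught target output of output neuron j for pattern in (-99 = do not train)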

void LearnBackpropagation() {    // backpropagation learning mode
                                 // 1 hidden/input layer (L0) + 1 output layer (L1)
  int idummy;

  int count;
  int in;         // applied sensor/input pattern
  int i;          // input counter
  int j, k;       // indices for the output neurons L1
  int m;          // index for the input-layer neurons L0


  float f_sig1;   // error signal of layer 1 for learning weights and thresholds
  float f_sig0;   // error signal of layer 0 for learning weights and thresholds

  float f_sum=0;  // hidden-layer error (sum of (weight*error signal))
  float out;      // neuron out, dummy
  float fehler=0; // sum of the error signals
  float epsilon;  // maximum permissible net error

  float delta_w0,  delta_w1;  // change of a weight
  float delta_th0, delta_th1; // change of a threshold

  bool LearnModeAuto=false; // learn automatically or manually by key


  count=299;
  epsilon=(float)(ni*L1)*0.1; // max. 10% error

  do {
   fehler=0;


   //MenuText="-- <<  ok  >> ++";
   count-=1;

   if (!LearnModeAuto)  PlaySound(soundBeepBeep);
    else PlaySound(soundBlip);

    for (in=0; in < (1<<ns); in++)    // all 2^ns sensor patterns; ns = number of sensors
   {
     SetInputPattern(in);        // apply the input pattern

     wait1Msec(200);             // let the net propagate through all layers



     for (j=0;j<L1;j++)          // j = first to last output neuron
     {
//=====================================================================================
       if (!LearnModeAuto)                   // read the target output via the keypad
       {

         sollOut=0;
         MenuText="-- <<  ok  >> ++";
         printXY(0,15, "soll:");
         printXY(48+(j*25),15,"%2.0f", sollOut);
         do
         {
           key=getkey();
           if (key==4) {                     // skip this learning step

             IOpattern[in][j]=-99;
             key=0;
             goto NEXT_INPUT;
           }  // if key

           if (key==1) { if (sollOut==1) sollOut=0;  }
           else
           if (key==3) { if (sollOut==0) sollOut=1;  }

           IOpattern[in][j]=sollOut;        // store the I/O pattern in the I/O array

           printXY(0,15, "soll:");
           printXY(48+(j*25),15,"%2.0f", sollOut);
           wait1Msec(100);
        } while ((key!=2)&&(key!=4));
      }
//=====================================================================================
      else                                  // read the target automatically from the I/O array
      {
         PlaySound(soundBlip);
         if (IOpattern[in][j]!=-99)
         {  sollOut=IOpattern[in][j];}
         else
         goto NEXT_INPUT;   // -99 => do not train this value!

         printXY(48+(j*25),15,"%2.0f", sollOut);
      }
//=====================================================================================

      wait1Msec(200);  // time to calculate the net
      println(6, " ");


      if (!LearnModeAuto)
      {                                  // learning mode START
        //....................................................
        if (sollOut==Neuron1[j].out)      // teachOut correct
        {
           PlaySound(soundBlip); PlaySound(soundBlip); wait1Msec(100);
           goto NEXT_INPUT;
        }
        //....................................................
        else                              // teachOut wrong
        {    PlaySound(soundException); wait1Msec(100);   }
      }




    // 1st step: determine the error signal of the output layer
    // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      out=Neuron1[j].out;

      f_sig1=out*(1-out)*(sollOut-out);         // error signal (j) for the output layer
      Neuron1[j].d=f_sig1;                      // store it in Neuron1[j]

      fehler=fehler + abs(sollOut-out);         // total error of all output neurons
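
      // where out*(1-out) comes from: for the squared error
      // E = 1/2*(sollOut-out)^2 with the logistic activation,
      // f_sig1 = (sollOut-out)*out*(1-out) is the negative gradient
      // -dE/dnet, i.e. the direction of steepest error descent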



     } // for j = first to last output neuron


     // 2nd step: determine the error signals of the hidden/input layer
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

     for (k=0;k<L1;k++)    // k = first to last neuron of the output layer

     {
         f_sig1=Neuron1[k].d;                          // error signal of the successor (output) neuron k

         f_sum=0;
         for (m=0; m<L0; m++)
        {  f_sum=f_sum + (Neuron1[k].w[m] * f_sig1);  }    // sum over all (weights(L1) * error signal(L1))

        out=Neuron1[k].out;
        f_sig0 = out * (1-out) * f_sum;                    // error signal for the hidden/input layer L0

        for (m=0; m<L0; m++)
        {
           Neuron0[m].d    = f_sig0;                       // store the same error signal in every Neuron0
        }

     // 3rd step: compute the new weights and thresholds for output layer L1
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

         for (m=0; m<L0; m++)
        {
           out=Neuron0[m].out;                                // predecessor out
           delta_w1  = lf  * out * f_sig1;                    // weight change for the output layer

           Neuron1[k].w[m] = Neuron1[k].w[m] + delta_w1;      // new weights for output layer L1
        }

        delta_th1 = lf  * f_sig1;                            // threshold change delta_th
        Neuron1[k].th  = Neuron1[k].th - delta_th1;          // new thresholds for output layer L1

     // 4th step: compute the new weights and thresholds for input layer L0
     // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


        for (m=0; m<L0; m++)
        {
           f_sig0 = Neuron0[m].d;

          for (i=0; i<ni; i++)  // ni = total number of neuron inputs (sensors + Elman neurons)
          {
             out=Neuron0[m].in[i];                           // predecessor out = sensor out = own input
             delta_w0  = lf  * out * f_sig0;                 // weight change for the input layer

             Neuron0[m].w[i] = Neuron0[m].w[i] + delta_w0;   // new weights for input layer L0
          }

          delta_th0 = lf  * f_sig0;                         // threshold change for the input layer
          Neuron0[m].th  = Neuron0[m].th - delta_th0;       // new thresholds for input layer L0
        }
        NEXT_INPUT:
        ;

      } // for k = first to last neuron of the output layer

    } // for in = first to last input pattern

  if (!LearnModeAuto)
  {
    MenuText="Menu manual. auto";

    do {
      key=getkey();
      if (key==1)    {  return;   }
      if (key==2)    {  LearnModeAuto=false;  }
      if (key==3)    {  LearnModeAuto=true;   }
      if (key==4)    {  return; }
    }
    while (key==0);
    key=0;
  }

  if (!LearnModeAuto)
  {
     PlaySound(soundBlip);
     key=getkey();
     if (key!=0) goto _ENDE;    // abort on any key

  }

  if (fehler>4) lf=0.8;         // adapt the learning factor to the remaining error:
  else
  lf=fehler/5;                  // large error -> large steps, small error -> fine tuning
  idummy=(int)(lf*10);
  eraseDisplay();
  MenuText=(string)count+" "+(string)idummy;
  MenuText=MenuText+" "+" f="+(string)fehler;



 } while ((fehler>epsilon)&&(count>=0));

_ENDE:

PlaySound(soundUpwardTones);
PlaySound(soundUpwardTones);

key=getkey();
MenuText= "   ";
}




//**********************************************************************
// Program flow control, menus
//**********************************************************************

int Menu_Recall() {
  eraseDisplay();
  MenuText="<Recall    Clear>";
  println(7, "%s", MenuText);
  println(0, "%s", " Hal "+Version);
  println(1, "%s", "----------------");
  println(2, "%s", "Reload my brain -");
  println(4, "%s", " Total Recall ?");
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  PlaySound(soundException);   }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}



int Menu_LearnSaveRun() {
  eraseDisplay();
  MenuText="<Learn  Sav  Run>";
  do
  {
     key=getkey();
     if (key==1)    {  return 1;   }
     if (key==2)    {  SaveMemory(); }
     if (key==3)    {  return 3;   }
     if (key==4)    {  PlaySound(soundException); }

     wait1Msec(100);
  }
  while ((key==0)||(key==2)||(key==4));
}

//**********************************************************************
// Main program
//**********************************************************************
int choice;


task main(){
  SensorType(S1)=sensorTouch;
  SensorType(S2)=sensorTouch;
  SensorType(S3)=sensorTouch;

  nVolume=2;
  InitAllNeurons();
  PrepThisNeuralNet();

  choice=Menu_Recall();
  if (choice==1)  { RecallMemory(); } // load the old memory

  StartTask (DisplayValues);
  StartTask (RefreshLayers);

  while(true)
  {
    choice=Menu_LearnSaveRun();
    if (choice==1)
    {
       StopTask(RefreshInputLayer);
       LearnBackpropagation();          // learning mode: backpropagation
    }
    MenuText="Menue: [ESC]";
    PlaySound(soundFastUpwardTones);
    StartTask (RefreshInputLayer);    // run mode
    do
    {
       key=getkey();
      wait1Msec(100);
    } while (key!=4);
  }

}


_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Wed Jun 25, 2008 8:26 am
Profile
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
I would love to have a French or English version :oops:


Mon Jul 06, 2009 1:10 pm
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Please, could you comment at least the Elman part of your code in English? :bigthumb:


Mon Jul 06, 2009 1:29 pm
Guru
User avatar

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
no, sry, completely impossible ;)

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Mon Jul 06, 2009 2:05 pm
Moderator
User avatar

Joined: Wed Mar 05, 2008 8:14 am
Posts: 3163
Location: Rotterdam, The Netherlands
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Yeah, I speak German and I have trouble reading half of those comments.
Quote:
ACHTUNG!
ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS!
DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKSEN.
IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS.
ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.

Source: [LINK]

_________________
| Professional Conduit of Reasonableness
| (Title bestowed upon on the 8th day of November, 2013)
| My Blog: I'd Rather Be Building Robots
| ROBOTC 3rd Party Driver Suite: [Project Page]


Mon Jul 06, 2009 3:16 pm
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Come on, stop kidding me 8)

I'm just very interested in neural nets, although I've only started to read some stuff about them. Ford, I wonder whether your net would work for continuous inputs (such as ultrasonic sensors).
Furthermore, I guess the feedback layer allows your outputs to take past input values into account. So suppose your net models a collision-avoiding behavior and is properly trained: how would the robot behave if it drove into a dead end? Would it be able to back out, extract itself from the dead end and then continue its cruise?
One more question: suppose the robot can track its position (by odometry and/or a compass, gyro sensor etc.). What kind of architecture would I have to implement to make my robot able to reach a goal position while avoiding obstacles? Should I add another input (like the relative angle to the goal heading), or a second net and a "merging layer" for the output computation?

Thank you in advance

(I'm gonna buy a French/German dictionary :mrgreen: )


Tue Jul 07, 2009 5:38 am
Guru
User avatar

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
hi XTinX,
thx for your interest in the NN.
1st, for me it was just sort of "academic" interest, implementing AI on an NXT.

2nd, consider that backpropagation nets need hundreds or even thousands of learning cycles (e.g., teaching an XOR condition like in my example pattern that Xander tried out). This takes HOURS of automated training, or maybe WEEKS of conditioning and self-training "by doing" (refer to "Skinner Box"). Sometimes the starting initialization leads to a dead end and the learning function diverges with each learning step. Then you have to initialize anew and start a new training run (for hours or weeks ;) )

3rd, an Elman net surely can learn from properly experienced conditions - but the training effort is exponentially larger.

4th, for an NXT-based NN the effort/success ratio is best with feed-forward nets. The training is quick (10-20 cycles), though not every condition can be trained - e.g., a single-layer net can never learn an XOR condition, since XOR is not linearly separable: no single line separates its true cases from its false ones. But such conditions can be approximated, which mostly fits.

5th, an application like training a labyrinth run may take hundreds of thousands of virtual neurons, but the NXT memory limits the NNs to a maximum of about 30 neurons. :P

and 6th, yes, analog sensors like ultrasonic, light or gyro sensors can be used as inputs.

Now that the miscalculations by RobotC finally seem to have been fixed, I'll try to figure out some useful applications for my NXT-NNs.

But now the next problem is:
I'll need powerful sensor and motor multiplexers for maybe 30 sensors (NN inputs) and 10 motors (NN outputs),
and the best way would be an NXT RS485 network with 4 NXTs and up to ten 4x muxers (1 at every NXT I/O port) to communicate with the master NXT which runs the neural net.
Unfortunately, RobotC hasn't got the C commands you need for such a network (unlike NXC).

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Tue Jul 07, 2009 1:14 pm
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Hi Ford,

I didn't understand why you want to have so many sensors and motors (what kind of application or behavior are they for?). But anyway, regarding a path-finding-while-obstacle-avoiding behavior: why don't you build a training file (couples of desired outputs versus inputs) by controlling the robot remotely (like an R/C car) and sampling the I/O couple values? Then you would run the optimization algorithm that calibrates your net. And finally, as the training file won't be perfect, you could still correct the behavior of the net remotely. Of course the position and heading of the robot would be fed into the net, as well as the goal position. The whole learning mechanism would be supervised, but does it matter?
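
A minimal sketch of such a training recorder, reusing the file I/O calls from the listing above (the file name, the sample count and the rcSteer/rcSpeed command variables are my assumptions, not part of the original code):

Code:
const string sTrainFile = "Training.dat";

float rcSteer, rcSpeed;    // hypothetical desired outputs, set by the R/C control task

// log nSamples (sensor inputs -> desired outputs) pairs while the robot is driven remotely
void RecordTrainingRun(int nSamples)
{
   TFileHandle   h;
   TFileIOResult res;
   int i, n;

   Delete(sTrainFile, res);                       // start with a fresh file
   OpenWrite(h, res, sTrainFile, nSamples*(ns+2)*8);
   for (n=0; n<nSamples; n++)
   {
      for (i=0; i<ns; i++)
      {  WriteFloat(h, res, SensorValue(i)); }    // current sensor inputs
      WriteFloat(h, res, rcSteer);                // desired outputs = remote commands
      WriteFloat(h, res, rcSpeed);
      wait1Msec(50);                              // one sample every 50 ms
   }
   Close(h, res);
}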

Cheers


Wed Jul 08, 2009 5:42 am
Guru
User avatar

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
of course my project is something like you mentioned.
but recognizing and perceiving the environment is not possible for my robot with just 3 or 4 sensors:

input sensors for the NN (input-layer neurons):
infrared, light, touch sensors and (some day) a Mindsensors cam to
- distinguish large from tiny obstacles
- recognize other robots and/or humans and/or pets
- avoid obstacles
- approach target objects
- center target objects with the grabber arm and grab them

output layer / output neurons:
controlling 2 motors for the wheels,
5 motors for the grabber arm and hand,
2 motors for rotating 2 ultrasonic sensors,
1 motor for rotating 1 cam

additional sensors for path finding, navigation and bearing (odometry, IR, light, compass):
- recognize beacons
- take bearings on beacons
- navigate by odometry data
- navigate by compass data
- find paths, e.g. by A*, Bug1, Bug2

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Wed Jul 08, 2009 7:53 am
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Sounds so sweet and ambitious! :D Surely the NXT is a little too weak to run such a program. What kind of net topology have you imagined??? I wonder how to design such a net (number of layers, where to set up closed loops etc ...)


Wed Jul 08, 2009 8:21 am
Guru
User avatar

Joined: Sat Mar 01, 2008 12:52 pm
Posts: 1030
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
net technology: look at the example from John Hansen (RS485 thread),
otherwise: Aloha net, maybe.

one NXT to run the NN,
one NXT to run the A*,
one NXT to run the Bug2 and the navigator,
and One NXT to rule them all, One NXT to find them, One NXT to bring them all and in the network bind them... ;)

_________________
regards,
HaWe aka Ford
#define S sqrt(t+2*i*i)<2
#define F(a,b) for(a=0;a<b;++a)
float x,y,r,i,s,j,t,n;task main(){F(y,64){F(x,99){r=i=t=0;s=x/33-2;j=y/32-1;F(n,50&S){t=r*r-i*i;i=2*r*i+j;r=t+s;}if(S){PutPixel(x,y);}}}while(1)}


Wed Jul 08, 2009 9:52 am
Rookie

Joined: Sat Jul 04, 2009 7:00 am
Posts: 16
Post Re: First Neural Net implemented on a Lego NXT (C like, ROBOTC)
Sorry Ford, but I meant "topology" instead of "technology" and was talking about the neural net, not the communication net. I don't get it: you said that the number of neurons in the net is limited to 30 on the NXT. By the way, I looked at your communication problem with the different NXT bricks. I think one way to exchange data is to send it in real time via Bluetooth. Each slave NXT sends the states of its sensors in a single data packet every 50 ms (for example), without acknowledgement. You just have to take care of the Bluetooth latencies.
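
A rough sketch of that 50 ms broadcast, assuming the bricks are already paired and that RobotC's standard mailbox calls (sendMessageWithParm, messageParm, bQueuedMsgAvailable, ClearMessage) are available; the two tasks would of course live on different bricks:

Code:
// --- slave NXT: broadcast its three touch-sensor states, fire and forget ---
task main()
{
   SensorType(S1)=sensorTouch;
   SensorType(S2)=sensorTouch;
   SensorType(S3)=sensorTouch;

   while (true)
   {
      // pack all three states into one message, no acknowledgement
      sendMessageWithParm(SensorValue(S1), SensorValue(S2), SensorValue(S3));
      wait1Msec(50);                      // one packet every 50 ms
   }
}

// --- master NXT (runs the NN): unpack the newest packet into the input layer ---
task ReadSlave()
{
   while (true)
   {
      if (bQueuedMsgAvailable())          // a packet has arrived
      {
         Neuron0[0].in[0] = messageParm[0];   // or feed every Neuron0[j],
         Neuron0[0].in[1] = messageParm[1];   // as RefreshInputLayer does
         Neuron0[0].in[2] = messageParm[2];
         ClearMessage();                  // make room for the next packet
      }
      wait1Msec(10);
   }
}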


Thu Jul 09, 2009 7:49 am