The scripts in this directory can be used to train a TensorFlow model that classifies gestures based on accelerometer data. The code uses Python 3.7 and TensorFlow 2.0. The resulting model is less than 20KB in size.
The following document contains instructions on using the scripts to train a model, and capturing your own training data.
This project was inspired by the Gesture Recognition Magic Wand project by Jennifer Wang.
Three magic gestures were chosen, and data were collected from 7 different people. Some random long movement sequences were collected and divided into shorter pieces; these, together with some automatically generated random sequences, make up the “negative” data.
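As a rough sketch of how a long recording might be divided into shorter negative pieces (the window length and helper function below are hypothetical, not taken from the training scripts):

import numpy as np

WINDOW = 128  # assumed samples per negative piece, not a value from the scripts

def split_into_negatives(sequence):
    # Divide a long (N, 3) accelerometer recording into WINDOW-length pieces.
    n_pieces = len(sequence) // WINDOW
    return [sequence[i * WINDOW:(i + 1) * WINDOW] for i in range(n_pieces)]

long_capture = np.random.randn(1000, 3)  # stand-in for a real recording
negatives = split_into_negatives(long_capture)
print(len(negatives), "negative pieces of shape", negatives[0].shape)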
The dataset can be downloaded from the following URL:
download.tensorflow.org/models/tflite/magic_wand/data.tar.gz
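For example, the archive can be fetched and unpacked with Python's standard library (a minimal sketch that extracts into the current directory):

import tarfile
import urllib.request

# Download and unpack the gesture dataset.
URL = "http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz"
urllib.request.urlretrieve(URL, "data.tar.gz")
with tarfile.open("data.tar.gz") as archive:
    archive.extractall()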
A Google Colaboratory notebook that demonstrates how to train the model is the easiest way to get started.
If you'd prefer to run the scripts locally, use the following instructions.
Use the following command to install the required dependencies:
pip install numpy==1.16.2 tensorflow==2.0.0-beta1
There are two ways to train the model:
Using a random split results in higher training accuracy than a person split, but inferior performance on new data.
$ python data_prepare.py
$ python data_split.py
$ python train.py --model CNN --person false
Using a person data split results in lower training accuracy but better performance on new data.
$ python data_prepare.py
$ python data_split_person.py
$ python train.py --model CNN --person true
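Conceptually, the two splits differ as in the sketch below (hypothetical file lists, not the actual logic in data_split.py or data_split_person.py):

import random

# Hypothetical capture files named output_{gesture_name}_{person_name}.txt.
files = [f"output_wing_{person}.txt"
         for person in ("hyw", "shiyun", "tangsy", "dengyl")]

# Random split: shuffle everything together, so the same person's
# gestures can appear in both the training and test sets.
random.shuffle(files)
train_random, test_random = files[:3], files[3:]

# Person split: hold out one person entirely, so the test set only
# contains gestures from someone the model has never seen.
held_out = "dengyl"
train_person = [f for f in files if held_out not in f]
test_person = [f for f in files if held_out in f]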
In the --model argument, you can provide CNN or LSTM. The CNN model has a smaller size and lower latency.
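The size difference can be seen by building two toy models and comparing parameter counts (an illustrative sketch only; the window size and layer choices below are assumptions, not the exact architectures in train.py):

import tensorflow as tf

SEQ_LEN, CHANNELS = 128, 3  # assumed window of 3-axis accelerometer samples

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (4, 3), padding="same", activation="relu",
                           input_shape=(SEQ_LEN, CHANNELS, 1)),
    tf.keras.layers.MaxPool2D((3, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),  # 3 gestures + negative
])

lstm = tf.keras.Sequential([
    tf.keras.layers.LSTM(22, input_shape=(SEQ_LEN, CHANNELS)),
    tf.keras.layers.Dense(4, activation="softmax"),
])

print("CNN parameters: ", cnn.count_params())
print("LSTM parameters:", lstm.count_params())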
To obtain new training data using the SparkFun Edge development board, you can modify one of the examples in the SparkFun Edge BSP and deploy it using the Ambiq SDK.
Follow SparkFun's Using SparkFun Edge Board with Ambiq Apollo3 SDK guide to set up the Ambiq SDK and SparkFun Edge BSP.
First, cd into AmbiqSuite-Rel2.2.0/boards/SparkFun_Edge_BSP/examples/example1_edge_test.
src/tf_adc/tf_adc.c
Add true in line 62 as the second parameter of the function am_hal_adc_samples_read.
src/main.c
Add the line below in int main(void), just before the line while(1):
am_util_stdio_printf("-,-,-\r\n");
Change the following line in while(1){...}
am_util_stdio_printf("Acc [mg] %04.2f x, %04.2f y, %04.2f z, Temp [deg C] %04.2f, MIC0 [counts / 2^14] %d\r\n", acceleration_mg[0], acceleration_mg[1], acceleration_mg[2], temperature_degC, (audioSample) );
to:
am_util_stdio_printf("%04.2f,%04.2f,%04.2f\r\n", acceleration_mg[0], acceleration_mg[1], acceleration_mg[2]);
Follow the instructions in SparkFun's guide to flash the binary to the device.
First, in a new terminal window, run the following command to begin logging output to output.txt:
$ script output.txt
Next, in the same window, use screen to connect to the device:
$ screen ${DEVICENAME} 115200
Output from the accelerometer will be shown on the screen and saved in output.txt, in the format of “x,y,z” per line.
Press the RST button to start capturing a new gesture, then press Button 14 when it ends. New data will begin with a line “-,-,-”.
To exit screen, hit Ctrl+A, immediately followed by the K key, then hit the Y key. Then run
$ exit
to stop logging data. Data will be saved in output.txt. For compatibility with the training scripts, change the file name to include the person's name and the gesture name, in the following format:
output_{gesture_name}_{person_name}.txt
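As an illustration, a capture log in this format could be split back into individual gesture recordings like this (a hypothetical helper, not part of the training scripts):

def read_gestures(path):
    # Split a capture log into per-gesture lists of (x, y, z) samples.
    # Each gesture starts at a "-,-,-" delimiter line.
    gestures = []
    with open(path) as log:
        for line in log:
            line = line.strip()
            if line == "-,-,-":
                gestures.append([])  # a new gesture begins
                continue
            parts = line.split(",")
            if len(parts) == 3 and gestures:
                try:
                    gestures[-1].append(tuple(float(v) for v in parts))
                except ValueError:
                    pass  # skip non-numeric lines, e.g. `script` headers
    return gestures

samples = read_gestures("output_wing_hyw.txt")
print(len(samples), "gesture recordings found")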
Edit the following files to include your new gesture names (replacing “wing”, “ring”, and “slope”):
data_load.py
data_prepare.py
data_split.py
Edit the following files to include your new person names (replacing “hyw”, “shiyun”, “tangsy”, “dengyl”, “jiangyh”, “xunkai”, “lsj”, “pengxl”, “liucx”, and “zhangxy”); a sketch of the edited lists follows this file list:
data_prepare.py
data_split_person.py
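For example, the edited gesture and person name lists might look like the following (variable names here are illustrative; match whatever each script actually uses):

# Replace the gesture and person name lists with your own.
folders = ["wing", "ring", "slope", "my_new_gesture"]
names = ["hyw", "shiyun", "tangsy", "dengyl", "jiangyh",
         "xunkai", "lsj", "pengxl", "liucx", "zhangxy", "my_name"]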
Finally, run the commands described earlier to train a new model.