Developers out there,
I have some questions about the ActivityRecognitionClient that are not answered in detail in the documentation.
1. What kind of machine learning models, neural networks, or combinations of algorithms are used? Is there more detailed information about this (how many layers/neurons are used, etc.)? In the case of combined algorithms: are there flowcharts available (input --> output)?
2. Which (low-power) sensors/input data and possibly other phone states (battery/charging state, etc.) are specifically used for training and classification?
According to this page and this table, the low-power sensors are the following composite sensors:
• Geomagnetic rotation vector
• Significant motion
• Step counter
• Step detector
• Tilt detector
Are only the composite sensors (e.g. the step counter, which indirectly draws on the accelerometer) evaluated for prediction and for training the NN, or is the raw data of the underlying base sensor (e.g. the accelerometer) accessed directly? Which phone states are taken into account for the prediction? (See the sketch after this question for how I check which of these sensors a device exposes.)
The number of sensors and their resolution vary from phone to phone. How is robustness guaranteed? Is certain sensor data treated preferentially? What is the procedure for missing data?
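For context, this is how I check which of the composite sensors listed above a given device actually exposes (a minimal Kotlin sketch; the tilt detector has no public Sensor.TYPE_* constant, so it is omitted):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorManager
import android.util.Log

// Logs which of the low-power composite sensors from the list above
// are present on the device, with their reported power draw and
// resolution. The tilt detector has no public type constant and
// therefore cannot be queried here.
fun logCompositeSensors(context: Context) {
    val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val types = mapOf(
        "Geomagnetic rotation vector" to Sensor.TYPE_GEOMAGNETIC_ROTATION_VECTOR,
        "Significant motion" to Sensor.TYPE_SIGNIFICANT_MOTION,
        "Step counter" to Sensor.TYPE_STEP_COUNTER,
        "Step detector" to Sensor.TYPE_STEP_DETECTOR
    )
    for ((name, type) in types) {
        val sensor = sensorManager.getDefaultSensor(type)
        if (sensor == null) {
            Log.d("Sensors", "$name: not available on this device")
        } else {
            Log.d("Sensors", "$name: power=${sensor.power} mA, " +
                "resolution=${sensor.resolution}")
        }
    }
}
```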
3. What about the training data? How was it collected (what type of test equipment was used, under what conditions, which positions and orientations were taken into account, etc.)? How many samples were used for training? How long is each recorded sample, and how high is the resolution of the data?
4. How can the result be influenced? How does the update frequency/time interval change depending on the activity being carried out (is a flowchart available)? What is the fastest possible interval? (The lowest value I have reached so far is 5 seconds; see the sketch below.) Which energy-saving options affect the result, and how?
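For reference, here is roughly how I currently request updates (a minimal Kotlin sketch, assuming the ACTIVITY_RECOGNITION permission is already granted; DetectionReceiver is just a placeholder name for my own receiver). No matter how far below 5000 ms I set the detection interval, updates never arrive faster than about every 5 seconds:

```kotlin
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.location.ActivityRecognition
import com.google.android.gms.location.ActivityRecognitionResult

// Placeholder receiver that unpacks the classification result.
class DetectionReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (ActivityRecognitionResult.hasResult(intent)) {
            val result = ActivityRecognitionResult.extractResult(intent)
            // result?.mostProbableActivity holds the detected activity
            // and its confidence value.
        }
    }
}

fun startActivityUpdates(context: Context) {
    val pendingIntent = PendingIntent.getBroadcast(
        context, 0, Intent(context, DetectionReceiver::class.java),
        // FLAG_MUTABLE (API 31+) so the system can attach the result extras.
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )
    ActivityRecognition.getClient(context)
        .requestActivityUpdates(1000L, pendingIntent) // requested: 1 s
        .addOnFailureListener { e -> e.printStackTrace() }
}
```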