I have a system that determines user behavior while driving a vehicle. While driving, the system records driver actions and logs data from various car sensors. Using this data in a supervised learning method, I want to predict which action the driver is about to take in future sessions from the sensor readings. My question is that I am not sure which approach is advisable (k-NN, SVM, deep learning, etc.).
The sensor data consists of numerical readings from roughly 20 sensors, such as accelerometers, GPS, or resource gauges like fuel quantity. The output of each sensor can be regarded as a feature.
The user actions can be regarded as the output column. Several actions can be chained together to form a pattern (such as pressing 5 buttons on the car console one after another, or using the car's LED GUI in a similar manner).
A row is appended to the dataset whenever an action occurs, together with its timestamp and the sensor readings at that time.
So after a driving session (or several), I will have sequential (time-series) rows of sensor data mapped to a single defined pattern (for example, 5 rows of sensor data mapped to the 5 actions that form a pattern). I want to train the system to predict patterns in future driving sessions from sensor input retrieved every second.
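To make the structure concrete, here is a minimal sketch of what one logged pattern might look like (the column names and values are made up for illustration; there would be ~20 sensor columns in practice):

```python
import pandas as pd

# Hypothetical example of one logged 5-action pattern: one row per second,
# sensor readings as feature columns, plus the action that occurred.
rows = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 10:00:00", "2024-01-01 10:00:01", "2024-01-01 10:00:02",
        "2024-01-01 10:00:03", "2024-01-01 10:00:04",
    ]),
    "accel_x":    [0.1, 0.3, 0.2, 0.0, -0.1],      # accelerometer reading
    "gps_speed":  [52.0, 53.1, 54.0, 54.2, 53.8],  # km/h
    "fuel_qty":   [41.2, 41.2, 41.1, 41.1, 41.1],  # litres
    "action":     ["btn_1", "btn_2", "btn_3", "btn_4", "btn_5"],  # label
    "pattern_id": [17, 17, 17, 17, 17],  # groups the 5 rows into one pattern
})
print(rows)
```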
The training dataset is relatively small: typically 1000-2000 action rows.
Another approach I thought of is to reduce the rows of each pattern to a single one. For example, instead of 5 rows for a 5-action pattern, I may have a single row with 3 separate columns (features) per sensor: min/max/median, mapped to a single pattern. This eliminates the time-series/sequential aspect of the dataset.
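For illustration, this is roughly how I would do that aggregation with pandas, continuing from the `rows` frame sketched above (names are hypothetical):

```python
# Collapse each multi-row pattern into a single row:
# min/max/median per sensor column, one label per pattern.
sensor_cols = ["accel_x", "gps_speed", "fuel_qty"]  # ~20 in practice

agg = rows.groupby("pattern_id")[sensor_cols].agg(["min", "max", "median"])
agg.columns = ["_".join(c) for c in agg.columns]  # e.g. accel_x_min

# Use the joined action sequence as the pattern label.
labels = rows.groupby("pattern_id")["action"].agg("_".join)
flat = agg.join(labels.rename("pattern"))
print(flat)  # one row per pattern, sequential aspect removed
```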
As I said, my main concern is which method to use. Since the dataset is small, deep learning may not be the best approach. A simpler neural network might work, but I don't know much about the specifics.
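To give a sense of the simple baselines I have in mind, here is a minimal scikit-learn sketch over the flattened `flat` frame from above (purely illustrative, and meant for the full 1000-2000 row dataset rather than the toy data):

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = flat.drop(columns="pattern")  # aggregated sensor features
y = flat["pattern"]               # pattern label per row

# With only 1000-2000 examples, cross-validated simple models seem like
# a sensible starting point before trying anything deep.
for name, model in [("svm", SVC()), ("knn", KNeighborsClassifier())]:
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(name, scores.mean())
```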