Place:
Linz, AT;
Site:
Time's Up Laboratories
Date:
-
Investigating biomechanical reactions in perceptional feedback environments
Report May 29/98
Aims:
investigation of biomechanical reactions under conditions of telepresence in a "loop-controlled" environment
Methods:
a group of subjects is presented with a situation in which they have to use their biomechanical functions to remote-control a telepresence device
Equipment:
INSIGHT Instruments "SOFT"
Nicolas A. Baginsky's Teleroboter
I-glasses
Powerbook 3400
PC 486
PC 486
Procedure:
Subjects are taken to a comfortable seat, electrodes are attached to their arms, they are shown the camera robot and its motion capabilities, receive a pair of I-glasses, and are instructed to experiment with controlling the camera robot via their muscle functions.
Results:
The perceptional feedback in a forced loop-closing environment seemed to cause strong biomechanical reactions in the subjects.
In order to gain control of their perceptions, the subjects seemed to go through various phases:
Initially they seemed to experience difficulties as they tried to use their biomechanics as they normally would to turn and move a camera; since the motion results were not as expected, an immediate phase of confusion followed, then a phase of trying to regain control of their own muscles, which seemed to cause an overreaction in some of the subjects, as they complained about muscle soreness. Subjects who initially appeared to have pronounced physical peculiarities (eye twitching, nervous tics, etc.) started to use those features and extended them further for control purposes. Other subjects fell into a state of competitiveness, asking whether they were any better or worse than the last ones. Only a few, mainly female, subjects appeared to aim for a state of relaxed awareness, focusing on the perception rather than on their own biomechanical capabilities, and started to playfully approach the telepresence capability as an extension of their body rather than an obstacle.
The semipublic, open environment definitely caused distracting side effects - from the subjects' occasional "showmanship" to general nervousness.
A situation where a subject's biomechanical reactions could be measured rather than "observed" might be a next step.
Of note was the one male subject who started to "air-drive" a car, putting his hands on an imaginary steering wheel and his legs onto nonexistent pedals for braking and acceleration. jam, 2/6/98
Report 98/6/10
intermediate report, goals, assumptions and thoughts about the preparations for my final experiment in the "closing the loop" lab at "time's up" in Linz
by analysing what had happened during the initial experiment at ctl-lab (see text by jam, scheme by nab) we figured we would need a non-human entity able to act as an objective observer in coming experiments. since artificial neural networks are my favourite tools for classifying complex real-world data, i suggested using a Teuvo Kohonen network to fulfil this task. these Kohonen networks, also called SOMs (Self-Organising Maps), have the property of classifying incoming data without any pre-defined knowledge about the meaning of that data. they are able to generate categories for classification purely from the presented data and the qualities hidden within.
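for illustration, a minimal SOM training loop might look like the following python sketch (grid size, learning rate and neighbourhood decay are generic placeholders, not the parameters used at ctl-lab):

```python
import numpy as np

def train_som(data, grid=(16, 16), dim=13, iters=1000,
              lr0=0.5, sigma0=4.0, seed=0):
    """Minimal self-organising map: no labels, the map orders itself."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], dim))          # random initial weights
    ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                         indexing="ij")
    for t in range(iters):
        x = data[rng.integers(len(data))]            # pick one input vector
        d = np.linalg.norm(w - x, axis=2)            # distance to every neuron
        wi, wj = np.unravel_index(d.argmin(), grid)  # "winner" neuron
        lr = lr0 * np.exp(-t / iters)                # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)          # shrinking neighbourhood
        h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)             # pull neighbourhood toward x
    return w
```

every update pulls the winner and its neighbours toward the presented input, so categories emerge purely from the data, as described above.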
we then discussed the various possibilities of how to engage such a neural network in the different types of experiments that will be made at ctl-lab this year. what kind of input should be used? what information will probably be available in all the different set-ups yet to come? here is what we came up with:
we will use a Kohonen Network with thirteen input dimensions and two output dimensions.
these are the inputs we are going to use:
- skin resistance, measured at the index finger of the left hand.
- muscle tonus of the right middle finger, measured on the inside of the right forearm.
- seven-channel fft output of brain-wave potentials, measured with three electrodes on the forehead and processed by an IBVA device and its software.
- four-channel "the eye" data. "the eye" is a device that measures brightness at four cleverly chosen locations on a video monitor. this is done with photoresistors attached to the screen. their values are digitised by a basic stamp and sent via a serial link to the Kohonen network.
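the seven-band spectrum channel can be pictured roughly like this (a generic band-power sketch; the sampling rate and band edges are invented placeholders, not the IBVA software's actual processing):

```python
import numpy as np

def band_powers(signal, fs=120.0, n_bands=7, f_max=35.0):
    """Split a signal's spectrum into n_bands equal-width frequency bands
    and return the mean power in each (band edges are placeholders)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # matching frequencies
    edges = np.linspace(0.0, f_max, n_bands + 1)      # 7 equal bands up to 35 Hz
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

a pure 10 Hz tone, for instance, would land entirely in the third band, so the seven values form a compact summary of where the spectral energy sits.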
the two-dimensional output layer should have a maximum of 128*128=16384 neurons. that way the classification result of the neural network could, for example, be interpreted as two-channel midi data (x and y position of the "winner" neuron = note-on, "winner" accuracy = velocity).
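a rough sketch of how the winner neuron of such a 128*128 map could become midi events (the distance-to-velocity scaling below is an invented placeholder, not the actual mapping):

```python
import numpy as np

def winner_to_midi(weights, x):
    """Map the best-matching neuron of a 128x128 map to two midi
    note-on tuples: x/y position -> note number, match quality -> velocity."""
    d = np.linalg.norm(weights - x, axis=2)        # distance of x to each neuron
    wx, wy = np.unravel_index(d.argmin(), d.shape) # "winner" position
    # accuracy: small distance -> high velocity (scaling is a placeholder)
    velocity = int(np.clip(127 * (1.0 - d.min() / (d.mean() + 1e-9)), 0, 127))
    return [(0, int(wx), velocity),                # channel 0: note = x position
            (1, int(wy), velocity)]                # channel 1: note = y position
```

since midi note numbers run 0-127, the 128-neuron grid axes map onto them directly.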
in a later discussion we agreed not to immediately apply the network output to the experiment, because this would destroy the network's objectivity. coupling the network results to a sound or other output device would influence the proband and thereby close the loop, engaging the neuronet in the experiment.
still, we are going to save the network output together with all the sensor data for later inspection and analysis. another tool for evaluating the experiments, or rather the differences between the experimental process with different probands, will be recordings of the graphical output of the networks. for every session the neuronet will be initialised at random. we will choose three of the thirteen input dimensions and display them as a three-dimensional grid that changes its perspectively drawn shape according to the network's development. this graphical output will be saved on every learning iteration and later assembled into quicktime movies. by watching the movies generated with the different test persons we should be able to notice significant differences and to compare the sessions on a very abstract level. NAB, 06/10/98
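the perspectively drawn grid described above could be produced per learning step along these lines (a minimal sketch; the choice of dimensions and the viewer distance are arbitrary assumptions):

```python
import numpy as np

def grid_frame(weights, dims=(0, 1, 2), viewer_z=3.0):
    """Project three chosen weight dimensions of the map onto 2D with a
    simple perspective divide; returns per-neuron (u, v) points that can
    be drawn as a deforming grid, one frame per learning iteration."""
    pts = weights[..., list(dims)]   # (rows, cols, 3) slice of the map
    z = viewer_z - pts[..., 2]       # depth of each node from the viewer
    u = pts[..., 0] / z              # perspective divide: nearer nodes
    v = pts[..., 1] / z              # spread further apart on screen
    return np.stack([u, v], axis=-1)
```

calling this once per training iteration and rasterising the (u, v) grid would yield the frame sequence to be assembled into the quicktime movies.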
Report 98/6/16
it finally happened. we managed to conduct the second experiment. last friday it was so cold and rainy that, peeping out the door that morning, i had the feeling not a single test or press person would show up. i was wrong - right on time the first (and only) journalist arrived. during that friday and also saturday afternoon we had four very different persons "on the couch". this time the probands did not sit on a chair during the experiment but lay on a comfortable chaise longue. their skin conductivity and the muscle activity of the middle finger of their unpreferred hand (left hand for right-handers and vice versa) were recorded. brain activity was also analysed using a Fast Fourier Transformation. two channels of that spectrum analysis (alpha waves and eye activity) were used to control my little camera robot. the camera image was presented to the test person through I-glasses. this live image was also analysed through a very simple device:
four photoresistors were taped to the screen of a video monitor. their analogue signals were digitised by a basic stamp connected through a serial link to a PC. all these input channels (thirteen altogether) were fed into a Kohonen network (i named it HARRIET) that was trained twice with (almost) every person.
in the first session the test persons were not given any information; they were only asked to tell us their hypotheses on how the system works. in the second session we explained to them how the connection between their brain and the robot works. a three-dimensional image generated by the Kohonen network was recorded for every training step and later assembled into little quicktime movies. we also recorded the entire data stream plus the three-channel HARRIET output in midi format, and we took screenshots of the skin-resistance and muscle-activity curves.
the camera robot was mounted in another room between two shelves filled with all kinds of objects. in some positions the camera got very close to the shelves, and the partly nervous camera movements created slightly stressful but also boring visual sensations. the second test person complained about a feeling of "sea-sickness" and we had to end his session after only twelve minutes. his skin conductivity rose in an exponential curve but immediately came down when we started talking with him about his experience. he had also participated in our first experiment and probably suffered from the decreased possibilities to directly control the camera. he also showed extremely strong muscle activity in his "A" session, when he was not informed about the control mechanism.
the two other male participants also displayed rising skin conductivity. only the female test person produced chaotic ups and downs in that parameter, but this might be a result of her hyperactive nature and her personal circumstances: she is an artist and was preparing an opening for the next day. only the last visitor did not try to control the movements of the robot with force. he simply relaxed and by doing so allowed his brain and the robot to float in all directions. he too is an artist and a very "cool" person in general.
all participants commented on or even complained about the images in some way (too boring, too hectic, not motivating enough ...). examples of their hypotheses and comments can also be found on the documentation site. NAB, 06/15/98