Classification of Pilot Attentional Behavior using Ocular Measures
Abstract. Revolutionary growth in technology has changed the way humans interact with machines. This can be seen in every area, including air transport. For example, countries such as the United States are planning to deploy NextGen technology in all fields of air transport. The main goals of NextGen are to enhance safety and performance and to reduce environmental impact by combining new and existing technologies. Loss of Situation Awareness (SA) in pilots is one of the human factors that affect aviation safety. Significant research on SA indicates that pilot perception errors leading to loss of SA are among the major causes of accidents in aviation. However, there is no system in place to detect these errors. Monitoring visual attention is one of the best mechanisms to determine a pilot's attention and hence perception of a situation. Therefore, this research implements computational models to detect a pilot's attentional behaviour from ocular data and to classify overall attention behaviour during instrument flight scenarios.
Keywords: attention classification, pilot situation awareness classification, scan path analysis, knowledge discovery in data, attention focusing, attention blurring.
1.1 Introduction
Air travel is a common mode of transport in the modern era and is considered one of the safest. Even though aviation accidents are not as common as road accidents, the associated losses have a greater impact. A single civil aircraft accident can claim the lives of hundreds of people and cause millions of dollars of economic loss. Therefore, airlines are bound to abide by strict safety policies and guidelines. Safety breaches by airlines are just one of the causes of aviation accidents.
Other causes include technical faults, human error and environmental conditions (Ancel et al., 2015). Past investigations have shown that more than 70% of accidents are caused by human error (Shappell and Wiegmann, 2009). Given their devastating effects, research into improving safety is a priority in aviation. In order to enhance safety and performance and to reduce environmental impact, countries such as the United States are planning to deploy Next Generation (NextGen) technologies in all fields of air transport. This research investigates the feasibility of improving aviation safety by designing a novel system to monitor pilot visual behaviour and detect possible errors in instrument scan patterns that could potentially lead to loss of pilot Situation Awareness (SA).
Previous research shows that ocular measures are effective in determining attentional behaviour (Thatcher and Kilingaru, 2012). Identified attentional behaviours can in turn be used to detect potential pilot errors. With the ongoing research in embedded eye trackers and continued technology growth, it can be foreseen that aircraft will include such advanced recording devices in the near future (Kilingaru, 2013).
The Knowledge Discovery in Data (KDD) process was used to collect ocular data and extract attention patterns. Flight simulator experiments were conducted with trainee pilots, and ocular data were collected using eye trackers. In the absence of readily available classifications of existing data, we developed a feature extraction and decision model based on the observed data and inputs from subject matter experts. Different attributes from the instrument scan sequence were also used to aggregate and devise models for scoring attention behaviours.
This is a significant step towards the detection of perceptual errors in aviation human factors. Based on this model, further applications can be developed for flight instructors to assess the performance of trainee pilots during simulator training. The model could also be developed into an SA monitoring and alerting system in future aircraft, thereby reducing the risk of accidents due to loss of SA.
2.1 Situation Awareness and Attention in Aviation
Situation Awareness (SA) is defined as awareness of all the factors that help in flying an aircraft safely under normal and non-normal conditions (Regal et al., 1998). In aviation, minor deviations and trivial failures may become major threats over time if not attended to in a timely manner (Sarter and Woods, 1991). Therefore, it is important that a pilot perceives, comprehends and projects correctly in order to assess the situation.
Attention is a very important human cognitive function. It enables the human brain to control thoughts and actions at any given time (Oakley,2004). Attending to something is considered the most significant task and has a major impact on performance of other tasks. During any task, when humans attend, they perceive. Perception is saved in memory and translated into understanding, which is used for planning actions. In aviation, a pilot’s level of attentiveness contributes to the overall SA.
Humans use various senses to perceive; however, visual attention is considered the predominant source of perception (Mack and Rock, 1998; Lamme, 2003). Vision system data are information rich and hence useful in a number of areas. In particular, these data can be highly beneficial in monitoring drivers' attention (Underwood et al., 2003; Smith et al., 2003; Ji and Yang, 2002) and pilots' attention (Yu et al., 2014; Haslbeck and Bengler, 2016; Ho et al., 2016). Attention can, to some extent, be detected from vision system data through the physiological characteristics of the human eye.
2.1.1 Physiological Factors
Human errors during driving or flying may occur because of multiple causes, including spatial disorientation, workload, fatigue or inattention. Although there is no system in place to correctly identify these causes before they result in mishaps, there have been research studies focusing on related areas. Monitoring physiological factors has proved an effective way of measuring possible causes of human error during driving or flying. In the early 1980s, an experiment was conducted to relate differences in heart rate to different levels of workload (Roscoe, 1982); however, no exact relationship between heart rate and workload was established because of the difficulty in defining workload. Nevertheless, the author concluded that pilot activity, task demand and effort did result in varying heart rates. In another experiment conducted to diagnose the mental workload of pilots, researchers collected cardiac, eye, brain and subjective data during an actual flight scenario (Hankins and Wilson, 1998). The researchers found eye movements to be a more reliable diagnostic method than heart rate, indicating high visual demand on pilots during flight operations. The Electroencephalogram (EEG) data did not provide statistically significant results.
Another study investigated brain wave activity associated with a simulated driving task (Craig et al., 2012). The study found that the brain loses capacity and slows as a person fatigues. Eye movements and pupil dilation are other popular measures used when monitoring workload, fatigue and attention (Diez et al., 2001; van de Merwe et al., 2012; Fitts et al., 2005); for example, differences in pupil diameter and fixation time, eye movement distance and speed under different levels of mental workload were analysed in (de Greef et al., 2009). The research review shows that, as in other operator-driven environments, many behavioural changes in pilots during flight operations can be observed by measuring various physiological parameters, such as heart rate, brain waves, eye movements and facial expression. However, monitoring heart rate and brain waves is intrusive and generally regarded as not feasible in real-time situations inside the cockpit while pilots are operating the aircraft. In contrast, most attentional characteristics can be observed non-intrusively by monitoring pilot eye movements.
2.1.2 Eye Tracking
It is evident that pilots are more prone to misperceptions during poor visual conditions. Although pilots are aware of this, researchers have found that “pilots continue to confidently control their aircraft on the basis of visual information and fail to utilize the instruments right under their noses” (Gibb et al., 2012). It is not only visual misperceptions that play a major role in aviation mishaps but the overconfidence of pilots as well. Simulators are already in place to help pilots practise instrument scanning. However, training alone has not been able to significantly change pilots’ vulnerability to such mishaps (Gibb et al., 2012). Consequently, there is a need to evaluate trainee pilots’ instrument scanning skills on simulators and also to monitor scan patterns during flights. The evaluation of pilots’ scan patterns should help identify mistakes during the training stage and may improve training. Monitoring pilots’ instrument scans is also important to help reduce in-flight human error considerably.
Capturing a pilot’s eye movements through non-intrusive eye tracking is the best way to identify pilot SA behavioural characteristics. Under normal conditions, a person looking at an object for a length of time classified as a gaze will perceive information from that object or area of interest (Rayner and Pollatsek, 1992). Specific behaviour and possible causes can be identified by observing where, when and what a person is seeing (where seeing is interpreted to mean looking at an object long enough to be defined as a gaze). Therefore, during flight operations, the position and duration of a pilot’s gaze can indicate the pilot’s behaviour at that time. The major task pilots perform during flight is perceiving information from different instruments. It is necessary to maintain the correct timing and proper sequence of instrument scanning throughout the flight. If the correct scan sequence is not followed, pilots may not perceive the required information, or may fail to detect incorrect information, which may lead to loss of SA. Mapping eye movements (glance, gaze and stare) to cognitive behaviours is discussed in detail in a previous article (Kilingaru et al., 2013).
From the flying manuals (FAA, 2012) and inputs from Subject Matter Experts (SMEs), the key instruments that must be scanned during flight are the Artificial Horizon (AH), Altimeter (ALT), Vertical Speed Indicator (VSI), Turn Coordinator (TC), Airspeed Indicator (ASI) and Navigator (NAV). Distributed attention and perception during an instrument scan are essential skills for pilots to master. The required instrument scan varies depending on the flight phase, as different instruments play critical roles during each phase of the flight. An anomalous instrument scan pattern can be mapped to erroneous behaviours such as attention focusing, attention blurring and misplaced attention, which are attentional indicators that a pilot could lose SA (Thatcher and Kilingaru, 2012). These indicators are defined as:
Attention focusing: A sequence of fixations with few or no transitions is considered fixation on a single instrument and hence indicates attention focusing. Continuous fixations on a particular instrument within a limited time period are clustered to identify the instrument being interrogated. Figure #AF shows a sample fixation pattern on a particular instrument during attention focusing.
Figure #AF: Sample fixation pattern during attention focusing
Attention blurring: This behaviour is characterised by a small number of fixations and an increased number of transitions between instruments. The fixation spans are very short and not sufficient to actually perceive the information. The pilot is simply glancing at instruments or observing them via peripheral vision. Figure #AB shows a sample instrument scan pattern during attention blurring.
Figure #AB: Sample instrument scan pattern during attention blurring
Misplaced attention: This behaviour is characterised by very short fixation spans inside the instrument panel. More time is spent fixating outside the instrument regions of the instrument panel than fixating on the relevant instruments. Figure #MA shows a sample scan pattern during an event of misplaced attention.
Figure #MA: Sample scan pattern during misplaced attention
To translate fixation data into behaviour patterns, it is necessary to continuously monitor fixations and represent them in digital form. This research study showed that implicit knowledge can be derived by periodically monitoring the position and sequence of fixation data. This time-stamped data stream was analysed to digitally classify pilot behaviour.
3.1 Knowledge Discovery in Data
Data can be conceived of as a set of symbols, but data alone do not convey meaning. To produce useful insights, data need to pass through a series of steps that extract the relevant information and convert it into wisdom. This process is called Knowledge Discovery in Data (KDD), and it involves the development of methodologies and tools to help extract wisdom from data. The fundamental purpose of KDD is to reduce a large volume of raw data into a form that is easily understood. The end results are produced in the form of a visualisation, a descriptive report or a combination of both. This process is aided by data-mining techniques to discover patterns (Fayyad et al., 1996).
Terminologies of the knowledge discovery process were introduced before 1990. Popular definitions from early studies include (Ackoff, 1989; Bellinger et al., 2004; Cleveland, 1982; Zeleny, 1987):
Data: Data correspond to symbols, such as text or numbers, and are always raw. They have no meaning unless they are associated with a domain or situation in the real world.
Information: Data that are processed and have relational connections so they are more meaningful and useful. Information, as a result of data processing, can provide facts such as ‘who’ did ‘what’, ‘where’ and ‘when’.
Knowledge: Extracted useful patterns are called knowledge. This is used to derive further understanding. New knowledge can be influenced by old knowledge.
Wisdom: This is an evaluated understanding of knowledge. Wisdom comes from analytical processes based on human understanding. Wisdom is nondeterministic and can be used in prediction processes.
The steps used in the KDD process are described below:
Data acquisition: In general, this step involves collection of raw data for processing.
Data pre-processing: Incomplete and inconsistent data are removed from the data set as preparation for further processing. This step can involve removal of outliers and extraneous parameters to clean and reduce the size of the target data set.
Feature extraction and data transformation: Useful features are extracted from the data during feature extraction. As part of data transformation, the large data set is reduced and converted into meaningful information appropriate for recognition algorithms to process.
Data Mining / Pattern recognition: Information is processed by algorithms to discover new knowledge. The knowledge can be in the form of patterns, or rules, or predictive models.
Evaluation: The knowledge, or pattern, is evaluated to derive useful reports or other outcomes such as predictions or ratings.
3.1.1 Knowledge Discovery Process for Instrument Scan Data
KDD has evolved over time, and in recent years, with huge amounts of data becoming available in every field, KDD has attracted wide interest. Research in KDD usually involves the overlap of two or more fields, such as artificial intelligence, machine learning, pattern recognition, databases, statistics and data visualisation (Fayyad et al., 1996). This study applied KDD principles to pilots' instrument scan data and established a methodology to convert instrument scan data into a sequence of behaviours that identify flight operator attentiveness during instrument flight. Figure #KDD shows how the methodology applied the steps of the KDD process. The main steps involved are vision data acquisition, cleansing, ocular gesture extraction, cognitive behaviour recognition through temporal analysis, and behaviour evaluation. The results provide an insight into the attention levels of the operator.
Figure #KDD: KDD process for instrument scan
Instrument Scan Data Acquisition
Instrument scan data were collected using the EyeTribe tracker (EyeTribe) while participants performed instrument flying scenarios on the Prepar3D (Lockheed Martin) flight simulator.
The steps followed included:
Participant briefing: Each participant was briefed about the scenario before each simulator session; for example, details of the departure and landing airports and weather condition settings. For the practice sessions, the participant was asked to perform some of the chosen instrument scans in a known order for the purpose of verifying the eye tracker output. After the practice session, participants were asked to perform preconfigured scenarios.
Gaze calibration: Calibration is an important step prior to conducting any eye tracking experiment. Calibration involves software setup based on the participant's eye characteristics and the lighting conditions in the area, for improved gaze estimate accuracy. Therefore, in the experiment, the student operator's eye movements were calibrated with the simulator screen coordinates prior to the first simulator operation. The calibration and verification step involved:
Asking the participant to sit in a comfortable position in front of the simulator.
Adjusting the eye tracker so that the eyes of the participant were detected and well captured, with both eyes almost at the centre of the green area, as shown in Figure #Calibration 1.
Calibrating the eye movements of the participant using the on-screen calibration points on the simulator monitor. On successful calibration, the EyeTribe tracker shows the calibration rating in stars, as shown in Figure #Calibration 2. The calibration was verified by asking the participant to look at the points and confirming that the tracker was detecting the gaze correctly.
Simulator configuration: The Prepar3D simulator was configured to launch the aircraft in Instrument Flight Rules (IFR) mode, with different departure and destination airports. The participant was asked to perform instrument flying using just the instrument panel. Weather conditions and failures were preconfigured for different scenarios without the participant's knowledge.
Gaze tracking: Gaze tracking was commenced from the EyeTribe tracker console immediately after the scenario started. Gaze records were saved into a file named after the time stamp. The end result (crash or successful landing) and the simulator configurations for each scenario were also recorded.
Figure #Calibration 1
Figure #Calibration 2
The eye tracker provides the gaze coordinates for each frame, the time stamp and the pupil diameter in JavaScript Object Notation (JSON) format, as shown in Figure #JSON.
Figure #JSON
In the sample, ‘category’ and ‘request’ indicate the type of request sent to the EyeTribe tracker. A successful request receives a response message with status code 200. ‘values’ contains the main data: gaze coordinates for each eye, averaged coordinates, the time stamp of the current frame and a field indicating the tracker’s current state.
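As an illustration, the minimal sketch below extracts the averaged gaze coordinates from one such response, assuming the org.json library. The field names ('statuscode', 'values', 'frame', 'avg') are assumptions based on the sample response above and should be matched against the actual tracker output.

```java
import org.json.JSONObject;

// Minimal parser for one EyeTribe-style response frame (field names are
// assumed from the sample shown above; adjust to the actual tracker output).
public class GazeFrameParser {

    public static double[] parseAvgGaze(String json) {
        JSONObject root = new JSONObject(json);
        if (root.getInt("statuscode") != 200) {
            return null; // unsuccessful request, no usable frame
        }
        JSONObject frame = root.getJSONObject("values").getJSONObject("frame");
        JSONObject avg = frame.getJSONObject("avg");
        // averaged x/y gaze coordinates in screen pixels
        return new double[] { avg.getDouble("x"), avg.getDouble("y") };
    }

    public static void main(String[] args) {
        String sample = "{\"category\":\"tracker\",\"statuscode\":200,"
            + "\"values\":{\"frame\":{\"avg\":{\"x\":412.3,\"y\":288.9},"
            + "\"timestamp\":\"2016-03-01 10:15:00.250\",\"state\":7}}}";
        double[] xy = parseAvgGaze(sample);
        System.out.printf("avg gaze: (%.1f, %.1f)%n", xy[0], xy[1]);
    }
}
```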
Data Preparation and Cleansing
With eye tracking, there is a possibility of missing frames when eye movement is not captured, or of corrupted data being captured. To process the data further, the raw JSON data were first converted into comma separated values using an online 'JSON to CSV' conversion tool (Mill, 2018). The raw data were then filtered to eliminate corrupted frames and to interpolate missing data with approximate values based on the previous and next frames. Invalid frames were eliminated via SQL transformation scripts, and missing values were imputed by applying multiple imputation by chained equations based on the average gaze coordinates from the left and right eyes and the pupil size.
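As a simplified illustration of the gap-filling step, the sketch below linearly interpolates missing samples from the nearest valid neighbours. It is a stand-in for, not a reproduction of, the multiple-imputation procedure used in the study; NaN marks a missing value.

```java
// Linear interpolation of missing gaze samples from the nearest valid
// neighbours, applied independently to each coordinate series.
public class GazeInterpolator {

    public static void fillGaps(double[] series) {
        int n = series.length;
        for (int i = 0; i < n; i++) {
            if (!Double.isNaN(series[i])) continue;
            int prev = i - 1;                       // last valid sample
            int next = i;
            while (next < n && Double.isNaN(series[next])) next++;
            if (prev < 0 || next >= n) continue;    // gap at an edge: leave as-is
            double step = (series[next] - series[prev]) / (next - prev);
            for (int j = prev + 1; j < next; j++) {
                series[j] = series[prev] + step * (j - prev);
            }
            i = next;
        }
    }

    public static void main(String[] args) {
        double[] x = { 100, Double.NaN, Double.NaN, 160, 170 };
        fillGaps(x);
        System.out.println(java.util.Arrays.toString(x)); // [100.0, 120.0, 140.0, 160.0, 170.0]
    }
}
```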
Mapping Area of Interest (AOI) and Sequential Representation
As the initial step for transforming the raw visual data into information, each frame from the continuous stream of vision data was mapped onto the Area of Interest (AOI) on the flight simulator screen. The following regions were marked as important AOIs for the purpose of this experiment: instruments – Artificial Horizon (AH), Airspeed Indicator (ASI), Turn Coordinator (TC), Vertical Speed Indicator (VSI), Altimeter (ALT) and Navigator (NAV), any other points on the instrument panel and the horizon.
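A minimal sketch of this AOI mapping step follows. The rectangle boundaries are illustrative placeholders, as the real values depend on the simulator screen layout used in the experiment.

```java
import java.awt.geom.Rectangle2D;
import java.util.LinkedHashMap;
import java.util.Map;

// Maps an averaged gaze coordinate to a named AOI; anything outside the
// six instrument regions falls into OTHER.
public class AoiMapper {

    private final Map<String, Rectangle2D> aois = new LinkedHashMap<>();

    public AoiMapper() {
        // placeholder screen rectangles (x, y, width, height)
        aois.put("ASI", new Rectangle2D.Double(100, 400, 80, 80));
        aois.put("AH",  new Rectangle2D.Double(190, 400, 80, 80));
        aois.put("ALT", new Rectangle2D.Double(280, 400, 80, 80));
        aois.put("TC",  new Rectangle2D.Double(100, 490, 80, 80));
        aois.put("NAV", new Rectangle2D.Double(190, 490, 80, 80));
        aois.put("VSI", new Rectangle2D.Double(280, 490, 80, 80));
    }

    public String mapGaze(double x, double y) {
        for (Map.Entry<String, Rectangle2D> e : aois.entrySet()) {
            if (e.getValue().contains(x, y)) return e.getKey();
        }
        return "OTHER"; // anywhere else on the panel or screen
    }

    public static void main(String[] args) {
        AoiMapper m = new AoiMapper();
        System.out.println(m.mapGaze(210, 430)); // AH
        System.out.println(m.mapGaze(10, 10));   // OTHER
    }
}
```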
From the gaze data, it is evident that an instrument scan path comprises a series of gaze data frames. Gaze data in time order can be represented as a temporal sequence based on AOI transitions. A Finite State Machine (FSM) recogniser was implemented to represent and track transitions. A state transition model is defined as a directed graph represented by:
G = (S, Z, T, S0, F)    (1)
Where:
S represents a finite set of states. For the regular instrument scan, S = {S0, S1, S2, . . . , Sn}.
Z represents a set of output symbols. For the current model, these are instruments such as AH, and they trigger transitions from Si to Sj.
T represents a set of transitions {T00, T01, . . . , Tm}, where Tij is the transition from Si to Sj.
S0 is the initial state.
F is the final state.
Each transition from one state to another was triggered by an event, principally the defined gaze changing from instrument to instrument or to another gaze point. Figure #StateTransition shows the various states and the changes in instrument fixation as the events that trigger transitions from one state to another. The instrument scan for the whole scenario is thus transformed into a set of state transitions triggered by changes of the area of interest.
Figure #StateTransition: Instrument scan represented as state transitions triggered by AOI change
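A minimal sketch of such a recogniser is given below: each change of fixated AOI emits one transition. The state and transition naming is illustrative, not the study's implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal recogniser for the state-transition model G = (S, Z, T, S0, F):
// a change of fixated AOI is the event that triggers a transition.
public class ScanPathFsm {

    private String currentState = "S0"; // initial state, no AOI fixated yet
    private String currentAoi = null;
    private final List<String> transitions = new ArrayList<>();

    // Feed the AOI of each successive fixation; a new AOI triggers a transition.
    public void onFixation(String aoi) {
        if (!aoi.equals(currentAoi)) {
            String nextState = "S_" + aoi;
            transitions.add(currentState + " -" + aoi + "-> " + nextState);
            currentState = nextState;
            currentAoi = aoi;
        }
    }

    public static void main(String[] args) {
        ScanPathFsm fsm = new ScanPathFsm();
        for (String aoi : new String[] { "AH", "AH", "ALT", "ASI", "AH" }) {
            fsm.onFixation(aoi);
        }
        fsm.transitions.forEach(System.out::println);
    }
}
```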
Attention Behaviour Identification
The next step in the process is to analyse the gaze data and identify the attentional indicators. Past studies have focused on representing gaze sequences in different ways; for example, visual representation as a sequence of transitions rendered as AOI rivers (Burch et al., 2013; Kurzhals and Weiskopf, 2015). In these approaches, analysis must still be done manually. Since data were collected from the eye tracker at a rate of 30 frames per second, even small scenarios of 20 minutes result in large data sets, making decisions based on visualisation challenging. Therefore, gaze data were translated into sequences of AOI transitions, and this study further investigated methods of sequential pattern mining (Abbott and Hrycak, 1990; Kinnebrew and Biswas, 2011) to analyse gaze sequences.
Although common mistakes made by pilots in different situations are listed in flight manuals (FAA, 2012), there is no defined notion of a wrong or bad instrument cross-check. Therefore, the analysis focused on detecting the attentional indicators of Misplaced Attention (MA), Attention Focusing (AF), Attention Blurring (AB) and Distributed Attention (DA) from the gaze transition sequence.
Behaviour Evaluation
The final step in this experiment process is to evaluate the recognised attention indicators as behaviours. To achieve this, the repeated attention patterns are awarded scores, and the scores are aggregated to relatively rank each pattern as good, bad or average. However, the study refrains from classifying scan patterns as good or bad in a general context because of the lack of decisive measures in aviation human factors.
4.1 Simulator Experiment Scenarios and Results
This section describes the experiment with flight simulator scenarios and the relevant results. During the experiments, trainee pilots were briefed on the simulator but not directed to perform a particular instrument scan. Each participant was asked to use only the six-instrument display to perform multiple scenarios. The experiment set-up procedure and calibration process were described in the Instrument Scan Data Acquisition section. Some of the scenarios also had failures injected into instruments such as the ALT or ASI. The operators were not informed of the failures. Table #Scenarios shows the different scenarios performed by each student.
Table #Scenarios: Scenarios performed by each student

| Student | Trial | Scenario | Sample Name |
|---|---|---|---|
| Student A | 1 | Clear skies | Student A Trial1 |
| Student A | 2 | Clear skies, instrument failures | Student A Trial2 |
| Student A | 3 | Storm dusk, instrument failures | Student A Trial3 |
| Student B | 1 | Clear skies | Student B Trial1 |
| Student B | 2 | Clear skies, instrument failures | Student B Trial2 |
| Student B | 3 | Storm dusk, instrument failures | Student B Trial3 |
| Student C | 1 | Clear skies | Student C Trial1 |
| Student C | 2 | Clear skies, instrument failures | Student C Trial2 |
| Student C | 3 | Clear skies, instrument failures | Student C Trial3 |
| Student D | 1 | Clear skies | Student D Trial1 |
| Student D | 2 | Clear skies, instrument failures | Student D Trial2 |
| Student D | 3 | Storm dusk, instrument failures | Student D Trial3 |
| Student E | 1 | Clear skies | Student E Trial1 |
| Student E | 2 | Storm dusk, instrument failures | Student E Trial2 |
| Student F | 1 | Clear skies | Student F Trial1 |
| Student F | 2 | Clear skies, instrument failures | Student F Trial2 |
4.1.1 Fixation Distribution Results
To extract the fixation distribution values, the averaged left and right eye coordinates and the state of each frame were used from the eye tracker output. These values, along with the pre-configured AOIs, were passed as input to a mapping program developed in Java. The program maps the pilot's gaze to the respective instruments. AOIs were marked for the six instruments (AH, ASI, ALT, NAV, TC, VSI), with OTHER indicating all other areas on the screen. For each scenario, the instrument mapping record was used to create a fixation distribution chart showing the percentage fixation on each AOI. The charts were created using the Microsoft Business Intelligence service. The percentage fixation distribution charts are shown in Figure #FD Student E and F and Figure #FD Students A to D.
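The aggregation behind these charts can be sketched as follows, assuming simple per-frame counting of mapped AOIs; the actual Java mapping program is not reproduced here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Percentage fixation distribution over AOIs from a mapped frame sequence.
public class FixationDistribution {

    public static Map<String, Double> percentages(String[] mappedFrames) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String aoi : mappedFrames) {
            counts.merge(aoi, 1, Integer::sum);    // frames per AOI
        }
        Map<String, Double> pct = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            pct.put(e.getKey(), 100.0 * e.getValue() / mappedFrames.length);
        }
        return pct;
    }

    public static void main(String[] args) {
        String[] frames = { "AH", "AH", "ALT", "OTHER", "AH", "ASI", "OTHER", "AH" };
        percentages(frames).forEach((aoi, p) -> System.out.printf("%s: %.1f%%%n", aoi, p));
    }
}
```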
All the participating students had approximately 30 to 40 hours of flight operating experience. Observing the fixation distributions of Student E and Student F in Figure #FD Student E and F, it is clear that Student E exhibited a better fixation distribution over the chosen AOIs. Student F spent 83–88% of the time gazing at areas other than the six primary instruments, whereas Student E spent less than 22% of the time on such areas. Further, Student E spent more time scanning the AH and ALT.
From the charts in Figure #FD Students A to D, Student A showed totally different fixation distributions during the three different scenarios. Student A spent 71–92% of the time gazing at areas other than the six primary instruments during all three scenarios.
Student C had similar fixation distributions in Trial 2 and Trial 3, both of which used the same simulator scenario with clear skies and instrument failures. Student C also spent more time gazing at the AH. It is also observed that student participants spent more time scanning the chosen AOI instruments, and less time on OTHER, during scenarios with failures. Most of the student participants maintained similar fixation distributions across the different scenarios. However, fixation distributions vary extensively between students; in other words, each student tends to follow his or her own fixation pattern regardless of the scenario. It was found that the fixation distribution representation of an instrument scan is not sufficient to identify attentional behaviours. Therefore, the study further investigated the sequential representation and sequential analysis of the instrument scan.
4.1.2 Instrument Scan Path Representation
There are different ways to represent a scan path or gaze trajectories, including but not limited to:
Fixation heat maps: These represent spatial gaze behaviours, highlighting the areas that are visually visited. Areas visited are considered ‘hotter’ than the other areas and represented by indicative colours. If used in scan path comparisons, it is easy to visually comprehend the heat maps. However, there are no clear boundaries between AOIs. Also, the temporal sequences of AOIs are not captured.
String-based representations: In gaze trajectory studies, gaze coordinates are normally mapped onto region names for each captured frame. A scan path is therefore a temporal series of region names and can be represented in ‘string’ form. With this type of representation, a scan path analysis problem is reduced to a sequence analysis problem, and both temporal and spatial information are preserved. One example is SubsMatch (Kübler et al., 2015), which uses a string-based representation for comparison of scan paths. This algorithm was applied to the comparison of complex search patterns by determining transition probabilities for sequences of transitions.
Vector-based representations: This type of representation is numerically fast and easy to process mathematically. Typical measures in vector-based representations are Euclidean distances between fixations and differences between saccade lengths. MultiMatch (Dewhurst et al., 2012) is an example of a method using vector-based representation.
Probabilistic methods: These methods are used for scan pattern comparisons when each sequence may contain repetitive tasks, or when there is a possibility of a high level of noise in the sequence. One example of probabilistic representation is the Hidden Markov Model (HMM), used to represent learning behaviours when comparing high and low performers (Kinnebrew and Biswas, 2011).
In this research, a combination of a string-based representation and a state transition model was used to represent the instrument scan sequence. Figure #StateTransition provides an overview of the chosen representation. The scan sequences are then classified into attentional behaviours and rated as poor, average or good.
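A minimal sketch of such a string encoding is shown below: each AOI is assigned one character and consecutive repeats are collapsed, so that only AOI transitions remain and sequence-analysis methods can be applied directly. The character assignment is an illustrative choice, not the study's.

```java
import java.util.Map;

// Encodes a scan path as a string: one character per AOI, with
// consecutive repeats collapsed so only transitions remain.
public class ScanPathEncoder {

    private static final Map<String, Character> CODE = Map.of(
        "AH", 'H', "ASI", 'S', "ALT", 'A',
        "TC", 'T', "VSI", 'V', "NAV", 'N', "OTHER", 'O');

    public static String encode(String[] aoiSequence) {
        StringBuilder sb = new StringBuilder();
        for (String aoi : aoiSequence) {
            char c = CODE.getOrDefault(aoi, 'O');
            if (sb.length() == 0 || sb.charAt(sb.length() - 1) != c) {
                sb.append(c); // keep only AOI transitions
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] scan = { "AH", "AH", "ALT", "ALT", "ASI", "AH", "OTHER" };
        System.out.println(encode(scan)); // HASHO
    }
}
```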
5.1 Attentional Behaviour Classification and Rating
The attention level of pilots during instrument flying is useful information that can be derived from the instrument scan sequence. Section #attentionalbehaviour listed the attentional error indicators as Misplaced Attention (MA), Attention Blurring (AB) and Attention Focusing (AF). Different methods can be used to classify the available data, including known classifications and qualitative methods relying on human judgement (Li, 2011).
No previous data with classifications were available that could be matched with the data from this research experiment to meet the required research objectives. Therefore, a supervised classification model could not be used for this study. In the absence of readily available classifications of existing data, this research study developed a feature extraction and decision model based on the observed data and inputs from the Subject Matter Expert (SME).
Further, the study used different attributes from the instrument scan sequence to aggregate and devise models for scoring attention indicators. Figures #AFScore, #ABScore and #MAScore show how an instrument scan sequence segment contributes to the attention score.
Figure #AFScore: Attention focus score
Finally, a rating model is used to classify pilot attention based on scan sequence. Of the available set of records, different scan sequences are rated as good, average and bad, by aggregating individual attention errors and attention distribution scores to compute an overall attention score. The attention ratings are defined by rules derived relative to the mean value of the attention scores. The attention scoring model is based on two components: the attention error indicator score and the attention distribution score. The two measures are aggregated to derive the overall attentional score.
One of the attributes of good attention is consistent transition between instrument regions. A higher attention distribution score means that the pilot regularly checks the different instrument AOIs and has a good attention pattern. This ensures instruments are scanned regularly and in the correct order. Instrument scan requirements vary for each flight manoeuvre; however, this research study considers the six main instruments and a standard threshold interval for each instrument. The score on each attentional indicator is computed over the sequence of transitions, as in Algorithms #1 and #2. Because the sequences are of varying length, scores are calculated and standardised for each transition.
Algorithm #1: Misplaced Attention and Attention Blurring Score
Algorithm #2: Attention focusing and Attention Distribution Score
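Algorithms #1 and #2 are not reproduced in this text. The sketch below is an assumed, simplified per-fixation scoring of the four indicators, normalised by sequence length; the thresholds and counting rules are illustrative placeholders, not the study's values.

```java
// Illustrative scoring of the four indicators over a fixation sequence,
// normalised by its length. SHORT_FIX_MS and FOCUS_RUN are assumptions.
public class AttentionIndicatorScores {

    static final long SHORT_FIX_MS = 150;  // assumed "glance" threshold (AB)
    static final int  FOCUS_RUN    = 10;   // assumed consecutive-fixation run (AF)

    // aoi[i] is the AOI of fixation i, durMs[i] its duration in milliseconds.
    public static double[] score(String[] aoi, long[] durMs) {
        int n = aoi.length;
        int ma = 0, ab = 0, af = 0, ad = 0, run = 1;
        for (int i = 0; i < n; i++) {
            if (aoi[i].equals("OTHER")) ma++;                 // misplaced attention
            if (durMs[i] < SHORT_FIX_MS) ab++;                // attention blurring
            if (i > 0 && aoi[i].equals(aoi[i - 1])) {
                if (++run == FOCUS_RUN) af++;                 // attention focusing
            } else if (i > 0) {
                run = 1;
                if (!aoi[i].equals("OTHER")) ad++;            // distribution: new instrument
            }
        }
        return new double[] { ma / (double) n, ab / (double) n,
                              af / (double) n, ad / (double) n };
    }

    public static void main(String[] args) {
        String[] aoi = { "AH", "AH", "OTHER", "ALT", "ALT", "ASI" };
        long[] dur  = { 400, 380, 120, 90, 420, 300 };
        double[] s = score(aoi, dur);
        System.out.printf("MA=%.3f AB=%.3f AF=%.3f AD=%.3f%n", s[0], s[1], s[2], s[3]);
    }
}
```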
Attention errors indicate lower attentiveness. Therefore, AB, AF and MA scores are inversely proportional to the overall attention rating. However, attention levels should increase with higher values of Attention Distribution (AD) score. Based on the above interpretation, the attention rating is modelled as a function of AD scores and the aggregation of attention error indicator scores.
The formula below is applied to generate the attention rating:
S = AD / (AF + AB + MA)

where:
S is the overall attention score
AD is the attention distribution score
AF is the attention focusing score
AB is the attention blurring score
MA is the misplaced attention score
The purpose of the attention score is to provide a metric for attention classification. Because there are no predefined metrics and labels for classifying attention during pilot instrument scans, a rule-based engine was defined on the basis of the sample observations. The mean of the computed attention scores is calculated, and a threshold constant is defined around the mean. A sample with an attention score within the mean-threshold range is classified as average attention, above the range as good attention and below the range as poor attention. This method provides the flexibility to rate attention based on the sample data instead of a predefined value.
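A minimal sketch of this scoring and rule-based rating follows, reading the formula above as the distribution score over the aggregated error scores. The threshold constant is an assumed value, and the demo scores are a subset of Table #overallattention.

```java
// Overall attention score and rule-based rating around the sample mean.
public class AttentionRating {

    public static double overallScore(double ad, double af, double ab, double ma) {
        return ad / (af + ab + ma); // distribution score over aggregated error scores
    }

    public static String rate(double score, double mean, double threshold) {
        if (score > mean + threshold) return "good";
        if (score < mean - threshold) return "poor";
        return "average";
    }

    public static void main(String[] args) {
        // a subset of the overall attention scores from the results table
        double[] scores = { 0.0355, 0.0580, 0.0591, 0.0459, 0.0040 };
        double mean = java.util.Arrays.stream(scores).average().orElse(0);
        for (double s : scores) {
            System.out.printf("%.4f -> %s%n", s, rate(s, mean, 0.01)); // assumed threshold
        }
    }
}
```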
5.1.1 Results
This section covers attention scores and rating of instrument scan sequences recorded from trainee pilots. The attention rating model developed in Java traversed individual scan sequences and computed the attention error indicator scores and the attention distribution scores for each sequence. The score for each indicator was computed over the sequence of transitions as specified earlier in the section. Because the sequences are of varying lengths, scores were calculated and standardised for each transition sequence. Table #AttentionScores shows the computed attention error indicators and attention distribution.
Table #AttentionScores: Attention indicator scores as certainty factors

| Sample Name | Misplaced Attention | Attention Blurring | Attention Focusing | Attention Distribution |
|---|---|---|---|---|
| Student_A Trial1 | 0.035294118 | 0.226190476 | 0.064705882 | 0.035714286 |
| Student_A Trial2 | 0.023839398 | 0.626884422 | 0.026139691 | 0.148241206 |
| Student_A Trial3 | 0.067286652 | 0.476190476 | 0.056254559 | 0.119868637 |
| Student_B Trial1 | 0.023897059 | 0.307550645 | 0.069852941 | 0.060773481 |
| Student_B Trial2 | 0.006261181 | 0.076991943 | 0.1326774 | 0.00179051 |
| Student_B Trial3 | 0.008616047 | 0.098060345 | 0.132920481 | 0.012931034 |
| Student_C Trial1 | 0.05046805 | 0.401058632 | 0.074684575 | 0.102605863 |
| Student_C Trial2 | 0.033639144 | 0.281776417 | 0.087410805 | 0.058192956 |
| Student_C Trial3 | 0.068697868 | 0.366353841 | 0.084124961 | 0.083651952 |
| Student_D Trial1 | 0.035443038 | 0.557667934 | 0.037974684 | 0.086185044 |
| Student_D Trial2 | 0.03113325 | 0.66084788 | 0.02117061 | 0.134663342 |
| Student_D Trial3 | 0.0368 | 0.461538462 | 0.047466667 | 0.120192308 |
| Student_E Trial1 | 0.044352044 | 0.278419593 | 0.094479094 | 0.067236599 |
| Student_E Trial2 | 0.012714207 | 0.139933628 | 0.122719735 | 0.016039823 |
| Student_F Trial1 | 0.062305296 | 0.40625 | 0.052959502 | 0.115625 |
| Student_F Trial2 | 0.03878976 | 0.404503106 | 0.072019653 | 0.083850932 |
Observed levels of attention during an instrument scan are considered good indicators of Situation Awareness (SA). Therefore, it is implied that attention errors indicate potential loss of SA and lower attention ratings. In contrast, attention shared between AOIs indicates a good level of attention. The overall attention score was derived as the attention distribution score over the aggregated attention error scores. Each sample was checked to see whether the overall attention score stood above or below a decision range computed around the sample arithmetic mean. Instrument scan patterns were then classified as having good, average or poor attention depending on whether the overall attention score was greater than, within or below the decision range, respectively. Table #overallattention provides the attention score and classification as ratings of the instrument scan sequences.
Table #overallattention: Overall attention score and rating

| Sample Name | Attention Score | Attention Rating |
|---|---|---|
| Student_A Trial1 | 0.035545024 | poor |
| Student_A Trial2 | 0.057962946 | good |
| Student_A Trial3 | 0.05909799 | good |
| Student_B Trial1 | 0.045903065 | average |
| Student_B Trial2 | 0.004006455 | poor |
| Student_B Trial3 | 0.024225496 | poor |
| Student_C Trial1 | 0.059330765 | good |
| Student_C Trial2 | 0.046623157 | average |
| Student_C Trial3 | 0.051693226 | good |
| Student_D Trial1 | 0.037405251 | poor |
| Student_D Trial2 | 0.049954955 | good |
| Student_D Trial3 | 0.062262241 | good |
| Student_E Trial1 | 0.053681508 | good |
| Student_E Trial2 | 0.02307329 | poor |
| Student_F Trial1 | 0.066441038 | good |
| Student_F Trial2 | 0.048501777 | good |
It can be observed that the samples from Student B (Student B Trial2 and Student B Trial3) have the lowest attention scores and hence are categorised as having poor attention. These scenarios also have higher attention focusing scores and lower attention distribution scores compared with the other scenarios. Student A Trial3, Student D Trial3 and Student F Trial1 have the top three attention scores. Though Student A Trial3 and Student F Trial1 did not have the top fixation distribution percentages, their instrument scan sequences showed consistent scanning of the instruments of interest. Student E Trial1 received a good overall attention rating, whereas the second trial from the same student (Student E Trial2) resulted in a poor attention rating.
This shows that attention behaviour varies across scenarios. Although Student E Trial2 had a good fixation density distribution, as shown in Figure #FD, its attention rating is not consistent with the fixation density distribution results. This further strengthens the hypothesis that attention depends on the duration and order of the scan, and not only on the aggregated fixation duration over a time period.
6.1 Conclusions
The motivation for the experiments discussed in this study was to arrive at a reliable measure and method that provide a better mechanism to identify a pilot's attention distribution and attention error indicators such as Attention Blurring (AB), Attention Focusing (AF) and Misplaced Attention (MA). During the course of the research, it was shown that ocular measures are effective in determining attentional behaviour.
The study also highlighted the importance of sequential representation of gaze data and not only the aggregated fixation distribution on AOIs. Attention indicator score models were designed and applied to the sequences to identify various attentional behaviours. It has been observed from the results that attention indicators can overlap during instrument scan. However, using the scoring model helps to determine the frequently exhibited attention indicators. The computation of attention provides a comparative rating of attention within the data set. The attention scores from the data set were categorised as good, average or poor relative to other participants in the group. However, the study refrains from labelling the behaviour as good or poor in general scenarios because, so far in aviation, there has been no clear distinction between expected good attention behaviour and poor attentional behaviour.
There were a few challenges that arose during this study. Currently, there is no standard definition of expected patterns during an instrument scan. Additionally, there are no real-time data or known classifications available in the aviation literature. Therefore, the study was based on the recommended instrument scans in instrument flying manuals and input from aviation Subject Matter Experts (SMEs). The six primary instrument scan during instrument flying was used as the case study for this research. However, the system could easily be extended to include other instruments and additional AOIs. One future extension could involve the development of an expert system that includes other scenarios during instrument scan and integrates the attention scoring and rating algorithms for the analysis of pilot behaviour. The scope of this study included only ocular measures, as eye tracking is a proven method of detecting visual attention. Along with ocular measures, the integration of speech processing or other physiological measures, such as facial expression recognition, may help in developing a robust futuristic SA monitoring system.
This research investigated the possibility of identifying attention errors but did not attempt to provide feedback to the pilot. However, in the future, a system based on this research could be developed that could monitor pilots’ behaviour in real time, and provide timely feedback and alerts to the pilots, which could prove to be lifesaving.
References
[1] E. Ancel, A. T. Shih, S. M. Jones, M. S. Reveley, J. T. Luxhøj, and J. K. Evans, "Predictive safety analytics: Inferring aviation accident shaping factors and causation," Journal of Risk Research, vol. 18, no. 4, pp. 428–451, 2015.