Augmented Reality (AR) has been making steady progress since its invention. You may have watched plenty of sci-fi movies, such as Mission Impossible or Iron Man, in which real-time data appears on a display right in front of the heroes while they go about saving the world. This is no longer pure fiction: soon, ordinary people like you and me will be using AR-enabled gadgets every time we walk, run, drive, or go shopping in a mall.
This raises the question of how a computer tracks the eyes in the first place. It matters because, in the coming years, the display will be with us almost constantly, and it should give us useful information without cluttering our view with useless data. The location and direction of the gaze matter just as much, and so does timing: information should appear when we want to see it, not all the time. This is critical because even a slight inaccuracy (such as missing another vehicle or a cyclist on the road) could lead to a fatal accident.
Eye movement tracking can be achieved through the joint coordination of projectors, cameras, and computer vision algorithms that together calculate the gaze point and the position of the eyes. There are three main types of eye movements:
Fixation is the most basic one: the eyes stay at a fixed position while the user watches an object for a longer moment without deviating from the target. The image falls on a part of the retina known as the fovea, which provides the sharpest vision. The brain then processes that image into something meaningful. By noting fixations, the computer understands when we are actually focusing on one thing at a time.
Saccades are sudden, rapid jumps of the eyes from one object to another. Examples include glancing back at notes on a paper sheet while typing them into a computer; shifting between the traffic lights and the road signs while driving; looking at a grocery shelf one moment and a passing bird the next; or reading a bulletin board and suddenly noticing a person walking past. While reading, the eyes move from left to right and then drop to the next line in the same pattern; once the reading is done, they may jump anywhere on the page. The brain stores these impressions, and this is how memory is formed. Good eye-tracking software must be able to detect such movements so that it understands where the eye intends to focus.
Pursuits (smooth pursuit movements) describe the way the eyes follow a moving object. When pursuit works accurately, the eyes move at exactly the same speed as the object. This is crucial while driving or performing any sports activity.
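These three movement types are commonly separated by how fast the gaze point travels between samples. The sketch below is a simplified velocity-threshold classifier; the sample format, threshold values, and function name are illustrative assumptions, not the API of any particular tracker.

```python
# Minimal velocity-threshold classifier for eye movement types.
# Thresholds (deg/s) are illustrative; real trackers tune them per device.

def classify_eye_movements(samples, fix_thresh=30.0, sacc_thresh=100.0):
    """Label each interval between gaze samples.

    samples: list of (t_seconds, x_deg, y_deg) gaze points.
    Velocity below fix_thresh -> fixation; above sacc_thresh -> saccade;
    in between -> smooth pursuit (a common simplified heuristic).
    """
    labels = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt  # deg/s
        if velocity < fix_thresh:
            labels.append("fixation")
        elif velocity < sacc_thresh:
            labels.append("pursuit")
        else:
            labels.append("saccade")
    return labels

# Example: steady gaze, then a slow drift, then a rapid jump.
samples = [
    (0.00, 0.0, 0.0),
    (0.01, 0.1, 0.0),   # ~10 deg/s  -> fixation
    (0.02, 0.6, 0.0),   # ~50 deg/s  -> pursuit
    (0.03, 4.6, 0.0),   # ~400 deg/s -> saccade
]
print(classify_eye_movements(samples))  # ['fixation', 'pursuit', 'saccade']
```

In practice, trackers smooth the velocity signal and merge very short intervals, but the core idea is the same: speed of gaze motion tells the three movement types apart.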
Combining Augmented Reality with Eye Tracking
AR content is usually synchronized with a real-world entity: if you want to know a street address, it is automatically displayed near the street where it physically resides. But too much information only worsens the situation, with AR labels overlapping every word and sign until nothing is readable. The main goal is therefore to calculate the eye position and show an AR label accordingly, only when necessary and not all the time.
For instance, someone who wants to know the calorie content of cereals in a supermarket could see the details of one brand at a time, the moment they gaze at that product, instead of everything at once, which would just be messy. This also spares them the time-consuming job of manually searching the ingredient list printed on the packet.
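The supermarket scenario boils down to a simple lookup: given the estimated gaze point, show the label of the one item being looked at, and nothing otherwise. The product names, calorie figures, and screen regions below are made-up example data, not taken from any real system.

```python
# Gaze-contingent labeling sketch: draw an AR label only for the item
# the user is currently looking at. All data here is illustrative.

def label_at_gaze(gaze, items):
    """Return the label of the item whose bounding box contains the gaze point.

    gaze: (x, y) screen coordinates of the estimated gaze point.
    items: list of (label, (x_min, y_min, x_max, y_max)) bounding boxes.
    Returns None when the user is not looking at any labeled item,
    so no overlay is drawn and the view stays uncluttered.
    """
    gx, gy = gaze
    for label, (x0, y0, x1, y1) in items:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return label
    return None

shelf = [
    ("Oat Flakes: 370 kcal/100 g", (0, 0, 100, 200)),
    ("Corn Pops: 390 kcal/100 g", (110, 0, 210, 200)),
]
print(label_at_gaze((150, 80), shelf))  # gaze on the second box
print(label_at_gaze((300, 80), shelf))  # gaze off the shelf -> None
```

A real system would add a dwell-time filter (only label after a fixation of a few hundred milliseconds) so that labels do not flicker during saccades, but the one-label-at-a-time principle is exactly what keeps the display readable.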
The first generation of Augmented Reality might make mistakes, but it will certainly improve with time. Only real-world testing will perfect its functioning, and only time will tell when everything will work with the accuracy we expect.