Zetar: but I'm fun! (At least, ask those on the IRC chan; I always have something new to discover about babies, thanks to my daughter <img src="smileys/smiley2.gif" border="0" align="middle" />)
Yes, I've worked with one of those devices, but not a Tobii specifically.
About the glint and iris identification: they aren't the same thing.
The glint, used in eyetracking, comes from an infrared LED shone at the eye: the light reflects off the front and back surfaces of the cornea and of the lens, leaving up to 4 "dots" (the Purkinje images). That light-blob pattern is then tracked by the eyetracker.
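The blob-tracking step itself is simple once you have the IR frame: the glints are by far the brightest pixels, so a threshold plus a centroid is enough for a first sketch. A minimal numpy version (the threshold value is made up; real trackers refine the blob to sub-pixel precision):

```python
import numpy as np

def glint_center(frame, thresh=240):
    """Locate the corneal glint in an IR frame: keep the near-saturated
    pixels and return their centroid (x, y). Threshold is illustrative."""
    ys, xs = np.nonzero(frame >= thresh)
    if len(xs) == 0:
        return None                      # no glint visible in this frame
    return xs.mean(), ys.mean()

# Synthetic 64x64 IR frame with a bright 3x3 glint centred on (x=20, y=40)
frame = np.zeros((64, 64), dtype=np.uint8)
frame[39:42, 19:22] = 255
cx, cy = glint_center(frame)
```

In practice you would run this on the pupil/glint region of interest only, and track several blobs if you want more than the first Purkinje image.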
It only works if the head isn't moving, obviously. If the head is free to move, you also need to know the head's displacement relative to the camera (a plane of translation parallel to the camera, in fact, plus three axes of rotation for the head on the neck).
It's still possible to track that: you need a wide-angle camera working in visible light (a regular webcam, in fact <img src="smileys/smiley1.gif" border="0" align="middle" />) to track the head movements. You can use models like Active Appearance Models, or Dementhon's POSIT (since the head is a rigid, non-deformable solid as far as our problem is concerned), to follow the head.
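To give an idea of why the rigid-head assumption matters: the non-iterative "POS" step at the heart of Dementhon's POSIT fits in a few lines of numpy. This is a hedged sketch under a scaled-orthographic camera, not the full iterative algorithm, and the landmark coordinates below are invented for illustration:

```python
import numpy as np

def pos_pose(model_pts, image_pts, focal):
    """The 'POS' step of Dementhon's POSIT: recover rotation and depth of a
    rigid object under a scaled-orthographic camera model.
    model_pts: (N,3) rigid 3D landmarks; image_pts: (N,2) their pixels."""
    A = model_pts[1:] - model_pts[0]            # model vectors from the reference landmark
    bx = image_pts[1:, 0] - image_pts[0, 0]     # image vectors from its projection
    by = image_pts[1:, 1] - image_pts[0, 1]
    I = np.linalg.lstsq(A, bx, rcond=None)[0]   # I = (f/Z0) * r1
    J = np.linalg.lstsq(A, by, rcond=None)[0]   # J = (f/Z0) * r2
    s1, s2 = np.linalg.norm(I), np.linalg.norm(J)
    s = (s1 + s2) / 2.0                         # scale factor f/Z0
    r1, r2 = I / s1, J / s2
    R = np.vstack([r1, r2, np.cross(r1, r2)])   # rotation matrix (rows = camera axes)
    Z0 = focal / s
    # camera-frame position of the reference landmark
    t = np.array([image_pts[0, 0] * Z0 / focal,
                  image_pts[0, 1] * Z0 / focal, Z0])
    return R, t

# Sanity check on synthetic data: a known rotation, a head ~20 units away
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

M = np.array([[0., 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3], [2, 2, 1]])
R_true, T, f = rot_z(0.3), np.array([0.5, -0.2, 20.0]), 800.0
P = M @ R_true.T + T                            # landmarks in camera frame
pix = f * P[:, :2] / T[2]                       # scaled-orthographic projection
R_est, t_est = pos_pose(M, pix, f)
```

POSIT proper then iterates this step, correcting the image points for the recovered depths until the pose converges; modern OpenCV exposes the same job as `solvePnP`.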
When I worked on that 5 years ago in my lab, using OpenCV (Matlab is slow as a turtle for that), it wasn't possible to do it in realtime.
I remember the setup: a USB webcam (the ToUcam II, at that time the only one able to do 60 fps), and a second one with the IR filter removed.
In the end, we could only do the tracking on recorded videos, because no computer could do in realtime what we wanted: with two webcams streaming at 60 fps, you have to run the AAM on one image, blob tracking on the other, and interpolate the mouse pointer position. Remember that the head isn't moving all the time, and the eye, even when moving, doesn't cover the whole image, so we needed a relationship between a displacement of the eye on a 320x240 or 640x480 image and the whole screen, 1650 pixels wide or something; each 1-pixel estimation error from the webcam cost us a x3 error on the cursor position! And since the eye is jittering all the time, because of the saccades, you can imagine the performance was terrible!
Furthermore, the USB2 protocol is maxed out by 640x480 images at 60 fps; remember that you have the overhead of the protocol on top of that, and DMA was only just good enough to handle two cams in parallel...
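That x3 figure is essentially the ratio of the two resolutions: the calibration we fitted boils down to an affine map between eye displacement in the webcam image and cursor position on screen, and the affine gain is what multiplies every pixel of detection error. A toy numpy version (the calibration fixation points are made up):

```python
import numpy as np

# Horizontal extents: eye displacement measured on a 640-px-wide webcam
# frame, cursor driven across a 1650-px-wide screen.
cam_w, screen_w = 640.0, 1650.0

# Fit screen_x = a * cam_x + b from (made-up, noise-free) calibration fixations
cam_x = np.array([0.0, 160.0, 320.0, 480.0, 640.0])
scr_x = cam_x * (screen_w / cam_w)      # ideal calibration targets
a, b = np.polyfit(cam_x, scr_x, 1)

# Every 1-px pupil-detection error becomes 'a' pixels of cursor error
# (1650/640 is about 2.6, i.e. the "x3" above, give or take)
cursor_err_per_px = a
```

With saccade jitter of a few camera pixels on top, the cursor wobbles by ten screen pixels or more, which is why heavy filtering (and patience) was needed.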
With a Tobii, all of that is done on-chip, so it can be done fast. But there are two problems, in the end: first, we don't really know the health effects of shining IR light 8 hours a day or more onto the retina of a geek (my wife is an optometrist, and she did a literature review on that). Second, there are 3 interactions with the mouse pointer: move, designate, interact. You can do the move part with the eyetracker, you can designate by leaving the cursor on the target, but how do you interact (i.e. send a command to the computer)? You need another channel (and no, the eyebrow isn't going to cut it, even by frowning, or blinking the eyelid. Too cumbersome. At the end of the day, you'd die with two heavy, muscular eyelids <img src="smileys/smiley17.gif" border="0" align="middle" />).
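The usual eyetracking workaround for that missing "interact" channel (not something from my setup, just the common technique) is dwell-clicking: fire the command when the gaze parks on one spot long enough. A sketch, with made-up thresholds:

```python
import numpy as np

class DwellClicker:
    """Fire a 'click' when the gaze stays within `radius` px of the same
    spot for `dwell_s` seconds. Thresholds are illustrative."""
    def __init__(self, radius=40.0, dwell_s=0.8):
        self.radius, self.dwell_s = radius, dwell_s
        self.anchor, self.start = None, None

    def update(self, x, y, t):
        """Feed one gaze sample (pixels, seconds); returns True on a click."""
        if self.anchor is None or np.hypot(x - self.anchor[0],
                                           y - self.anchor[1]) > self.radius:
            self.anchor, self.start = (x, y), t   # gaze moved: restart the timer
            return False
        if t - self.start >= self.dwell_s:
            self.start = t                        # re-arm so we don't click repeatedly
            return True
        return False

# Jittery fixation on one target for ~1 s, then a saccade away
clicker = DwellClicker()
samples = [(100, 100, 0.0), (105, 100, 0.3), (95, 100, 0.6), (103, 100, 0.9)]
clicks = [clicker.update(x, y, t) for x, y, t in samples]
```

The `radius` has to swallow the saccadic jitter mentioned above, and the dwell time is a trade-off: too short and every glance clicks, too long and the interface feels glacial.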
To do efficient iris recognition, you need to take a high-res picture of the iris, then unwrap the image from Cartesian to polar coordinates (to make the comparison against a database possible). That's why it only works with really good cameras (CSI only works on TV, you know. And by the way, who in Hollywood thinks you can walk onto a crime scene in a suit, and that a blonde forensic examiner can search for evidence on a body with her hair loose? DNA everywhere, anyone? <img src="smileys/smiley4.gif" border="0" align="middle" />)
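The unwrap itself (Daugman's "rubber sheet" normalisation, to give it its name) is a plain resampling job. A nearest-neighbour numpy sketch, with the pupil and iris radii hardcoded here instead of detected:

```python
import numpy as np

def unwrap_iris(img, cx, cy, r_in, r_out, n_r=32, n_theta=128):
    """Daugman-style 'rubber sheet': resample the annulus between the pupil
    radius r_in and the iris radius r_out onto a (radius x angle) rectangle.
    Nearest-neighbour for brevity; real systems interpolate."""
    rs = np.linspace(r_in, r_out, n_r)
    ts = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rs, ts, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

# Synthetic "eye": pixel value = distance from the centre, so every row of
# the unwrapped strip should be (nearly) constant at its own radius.
yy, xx = np.mgrid[0:200, 0:200]
img = np.hypot(yy - 100.0, xx - 100.0)
strip = unwrap_iris(img, 100, 100, r_in=30, r_out=80)
```

Once every iris lands on the same rectangular grid, two of them can be compared texture-to-texture regardless of pupil dilation or camera distance, which is the whole point of the normalisation (and why a blurry CCTV frame won't do).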