Video concept: an iPod Touch with a 3D screen and image orientation based on the user's position?

[youtube]http://www.youtube.com/watch?v=2gm7u6f2glE[/youtube]

A Japanese website has launched a rather interesting rumor regarding the next generation of the iPod Touch. According to its claims, Apple intends to produce an iPod Touch with a glasses-free 3D screen and a system that orients the on-screen image according to the user's position. Apple has filed various patent applications for inventions related to 3D technology, but the Japanese site claims that Apple could implement a 3D screen similar to the one in the upcoming Nintendo 3DS. The information about the implementation of 3D technology in the iPod Touch reportedly comes from sources close to Apple's component manufacturers.

In addition to 3D technology, Apple would also implement a system that orients the on-screen image depending on the user's position. This system would use the front camera, the gyroscope and the accelerometer to orient the displayed image according to the position from which the user is looking. To be honest, this technology seems much more interesting than the 3D one and would probably be implemented before it.
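For a rough idea of how such a system could work, here is a minimal sketch in Swift. It is purely illustrative, built on hypothetical types, assumed values and simplified math rather than any actual Apple API, and it only shows how a face position detected by the front camera might be combined with the device tilt into an estimate of where the viewer is looking from:

[code]
import Foundation

// Normalized face position reported by a hypothetical front-camera face tracker;
// (0.5, 0.5) means the face is centred in the camera frame.
struct FaceObservation {
    let x: Double
    let y: Double
}

// Device tilt derived from the accelerometer / gyroscope, in radians.
struct DeviceTilt {
    let pitch: Double
    let roll: Double
}

// Estimated direction from the screen towards the viewer's head, in radians.
struct ViewerDirection {
    let horizontal: Double
    let vertical: Double
}

// Combine the camera observation with the device tilt.
// A front-camera field of view of roughly 60 degrees is assumed here.
func estimateViewerDirection(face: FaceObservation,
                             tilt: DeviceTilt,
                             cameraFieldOfView: Double = 60.0 * Double.pi / 180.0) -> ViewerDirection {
    // Map the face offset from the frame centre (-0.5...0.5) onto an angle within the field of view.
    let horizontalFromCamera = (face.x - 0.5) * cameraFieldOfView
    let verticalFromCamera = (face.y - 0.5) * cameraFieldOfView

    // Tilting the device also changes where the viewer is relative to the screen,
    // so the tilt angles are added to the camera-based estimate.
    return ViewerDirection(horizontal: horizontalFromCamera + tilt.roll,
                           vertical: verticalFromCamera + tilt.pitch)
}

// Example: face slightly to the right of the frame centre, device tilted back a little.
let direction = estimateViewerDirection(face: FaceObservation(x: 0.6, y: 0.45),
                                        tilt: DeviceTilt(pitch: 0.1, roll: 0.0))
print("viewer at \(direction.horizontal) rad horizontal, \(direction.vertical) rad vertical")
[/code]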

The description below, apparently taken from one of Apple's patent applications, explains how such a display would behave:

An electronic device for providing a display that changes based on the user's perspective is provided. The electronic device may include a sensing mechanism operative to detect the user's position relative to a display of the electronic device. For example, the electronic device may include a camera operative to detect the position of the user's head. Using the detected position, the electronic device may be operative to transform displayed objects such that the displayed perspective reflects the detected position of the user. The electronic device may use any suitable approach for modifying a displayed object, including for example a parallax transform or a perspective transform. In some embodiments, the electronic device may overlay the environment detected by the sensing mechanism (e.g., by a camera) to provide a more realistic experience for the user (e.g., display a reflection of the image detected by the camera on reflective surfaces of a displayed object).
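To illustrate what a "parallax transform" of displayed objects could mean in practice, here is a second small sketch. It is a simplified interpretation with assumed layer names and values, not code from Apple's patent: it merely shifts display layers by different amounts depending on their depth and the detected head offset, which is what produces the looking-around-the-object effect.

[code]
import Foundation

// A displayed element with a screen position and a depth value:
// depth 0 sits on the screen plane, larger depths appear further "inside" the device.
struct Layer {
    let name: String
    let x: Double
    let y: Double
    let depth: Double
}

// Head offset from the screen centre, normalised to -1...1 on both axes,
// as it might be reported by a camera-based head tracker.
struct HeadOffset {
    let x: Double
    let y: Double
}

// Simple parallax transform: deeper layers are shifted more, in the direction opposite
// to the viewer's offset, creating the illusion of looking "around" the displayed objects.
func applyParallax(to layers: [Layer], head: HeadOffset, strength: Double = 20.0) -> [Layer] {
    return layers.map { layer in
        Layer(name: layer.name,
              x: layer.x - head.x * layer.depth * strength,
              y: layer.y - head.y * layer.depth * strength,
              depth: layer.depth)
    }
}

// Example: the viewer has moved to the right of the screen centre.
let scene = [
    Layer(name: "icon", x: 160, y: 240, depth: 0.0),
    Layer(name: "background", x: 160, y: 240, depth: 1.0)
]
for layer in applyParallax(to: scene, head: HeadOffset(x: 0.5, y: 0.0)) {
    print("\(layer.name): x = \(layer.x), y = \(layer.y)")
}
[/code]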

What do you think: which of these two technologies could be implemented first?