Our perception of the world depends on the task we are currently performing in our environment: when driving a car, we pay attention to the objects that are visually important to that task, such as the road, road signs, and other vehicles. The same is true when we explore virtual environments. Creating high-fidelity 3D maps on mobile devices to aid navigation in urban environments is computationally very expensive, precluding this level of quality at interactive rates. In this paper we present a case study showing how the human visual system may be exploited, while viewers are undertaking a task, to reduce the overall quality of the displayed image without users being aware of the reduction. The images are selectively rendered: the key features used to identify location and orientation in a 3D urban environment are produced in high quality, and the remainder of the image in low quality.