Next-Generation User Interfaces That Are (Almost) Here.

When we talk about user interface (UI) in computing, we're referring to how a computer program or system presents itself to its user, usually via graphics, text and sound. We're all familiar with the typical Windows and Apple operating systems, where we interact with icons on our desktop with our mouse cursors. Prior to that, we had the old-school text-based command-line prompt. The shift from text to graphics was a major leap initiated by Apple co-founder Steve Jobs with his hallmark Macintosh operating system in 1984. In recent years, we've also witnessed innovative UI that involves the use of touch (e.g. smartphones), voice (e.g. Siri) and even gestures (e.g. Microsoft Kinect).
These, however, are pretty much in the early stages of development. Nevertheless, they give us a clue as to what the next revolution of UI may be. Here are 8 key features of what next-generation UI may be like. Recommended Reading: Cool Futuristic/Concept Gadgets That Really Inspire.

Gesture Interfaces.

The 2002 sci-fi movie Minority Report portrayed a future where interactions with computer systems are primarily through the use of gestures. Wearing a pair of futuristic gloves, Tom Cruise, the protagonist, is seen performing various gestures with his hands to manipulate images, videos and datasheets on his computer system. A decade ago, it might have seemed a little far-fetched to have a user interface where spatial motions are detected so seamlessly.
Today, with the advent of motion-sensing devices like the Wii Remote in 2006 and Kinect and PlayStation Move in 2010, such interfaces are within reach. In gesture recognition, the input comes in the form of hand or other bodily motion to perform computing tasks that, to date, are still issued via a hand-held device, touchscreen or voice. The addition of the z-axis to our existing two-dimensional UI will undoubtedly improve the human-computer interaction experience. Just imagine how many more functions could be mapped to our body movements.
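The idea of mapping recognized motions, including z-axis motions, to commands can be sketched roughly as follows. This is a minimal illustration, not a real motion-sensing API: the `Gesture` structure, the gesture names and the command table are all hypothetical, standing in for the skeletal-tracking output a device like Kinect would provide.

```python
# Hypothetical sketch: translating recognized 3D gestures into UI commands.
from dataclasses import dataclass

@dataclass
class Gesture:
    name: str       # e.g. "swipe_left", as labelled by a gesture recognizer
    depth: float    # z-axis displacement in metres, the "third dimension"

# Illustrative mapping of gestures to commands (assumed names).
GESTURE_COMMANDS = {
    "swipe_left": "previous_photo",
    "swipe_right": "next_photo",
    "push": "select",     # motion along the z-axis, new in 3D UIs
    "pull": "zoom_out",
}

def dispatch(gesture: Gesture) -> str:
    """Translate a recognized gesture into a UI command string."""
    # A z-axis threshold distinguishes a deliberate push/pull from hand jitter.
    if gesture.name in ("push", "pull") and abs(gesture.depth) < 0.05:
        return "ignore"
    return GESTURE_COMMANDS.get(gesture.name, "ignore")
```

For example, `dispatch(Gesture("push", 0.2))` would return `"select"`, while the same push with barely any depth would be ignored as noise.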
Well, here's a demo video of g-speak, a prototype of the computer interface seen in Minority Report, designed by John Underkoffler, who was actually the film's science advisor. Watch how he navigates through thousands of photos in a 3D plane through his hand gestures and collaborates with fellow users. Underkoffler believes that such UI will be commercially available within the next five years.

Brain-Computer Interface.

Our brain generates all kinds of electrical signals with our thoughts, so much so that each specific thought has its own brainwave pattern.
These unique electrical signals can be mapped to carry out specific commands, so that thinking the thought can actually carry out the set command. With the EPOC neuroheadset created by Tan Le, the co-founder and president of Emotiv Lifesciences, users don a futuristic headset that detects the brainwaves generated by their thoughts.
As you can see from this demo video, the commands executed by thought are still pretty primitive. It looks like this UI may take a while to be adequately developed. In any case, envision a (distant) future where one could operate computer systems with thoughts alone.

Flexible OLED Display.

If touchscreens on smartphones are rigid and still not responsive enough to your commands, then you might be first in line to try out flexible OLED (organic light-emitting diode) displays. The OLED is an organic semiconductor that can still emit light even when rolled or stretched.
Stick it on a plastic bendable substrate and you have a brand new and less rigid smartphone screen. (Image credit: meharris)

Furthermore, these new screens can be twisted, bent or folded to interact with the computing system within. Bend the phone to zoom in and out, twist a corner to turn the volume up, twist the other corner to turn it down, twist both sides to scroll through photos, and more. Such a flexible UI enables us to interact naturally with the smartphone even when our hands are too preoccupied to use the touchscreen.
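The bend-and-twist examples above amount to a mapping from deformation events to UI actions. Here is a toy sketch of such a mapping; the `FlexEvent` structure, the event names and the magnitude threshold are assumptions for illustration, not any real flexible-display API.

```python
# Hypothetical mapping of flexible-display deformations to UI actions,
# following the bend/twist examples in the text.
from typing import NamedTuple

class FlexEvent(NamedTuple):
    kind: str         # "bend", "twist_left", "twist_right", "twist_both"
    magnitude: float  # normalized 0..1 reading from assumed flex sensors

ACTIONS = {
    "bend": "zoom",
    "twist_left": "volume_up",
    "twist_right": "volume_down",
    "twist_both": "scroll_photos",
}

def handle_flex(event: FlexEvent) -> str:
    """Map a deformation event to an action, ignoring slight flexing."""
    # Everyday handling bends the device a little; require a firm gesture.
    if event.magnitude < 0.2:
        return "none"
    return ACTIONS.get(event.kind, "none")
```

A firm twist of one corner (`handle_flex(FlexEvent("twist_left", 0.8))`) would raise the volume, while the slight flexing of normal handling does nothing.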
This could well be the answer to the insensitivity of smartphone screens towards gloved fingers, or to fingers too big to reach the right buttons. With this UI, all you need to do is squeeze the phone with your palm to pick up a call.

Augmented Reality (AR).

We are already experiencing AR on some of our smartphone apps like Wikitude and Drodishooting, but they are pretty much at elementary stages of development. AR is getting its biggest boost in awareness via Google's upcoming Project Glass, a pair of wearable eyeglasses that lets you see virtual extensions of reality that you can interact with. Here's an awesome demo of what to expect. AR can be on anything other than glasses, so long as the device is able to interact with a real-world environment in real time.
Picture a see-through device which you can hold over objects, buildings and your surroundings to give you useful information. For example, when you come across a foreign signboard, you can look through the glass device to see it translated for your easy reading. AR can also make use of your natural environment to create mobile user interfaces that you can interact with, by projecting displays onto walls and even your own hands.
Check out how it is done with SixthSense, a prototype of a wearable gestural interface developed at MIT that utilizes AR.

Voice User Interface (VUI).

The most recent hype over VUI has got to be Siri, a personal assistant application incorporated into Apple's iOS. It uses a natural language user interface for its voice recognition function to perform tasks exclusively on Apple devices. However, you can also see it as the supporting act in other user interface technologies, like Google Glass itself.
Glass works basically like a smartphone, only you don't have to hold it up and interact with it with your fingers. Instead, it clings to you as eyewear and receives your commands via voice control. The only thing lacking now in VUI is the reliability of recognizing what you say. Perfect that, and it will be incorporated into the user interfaces of the future.
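A VUI ultimately boils down to two steps: turn audio into text, then match the text against known commands. The sketch below stubs out the hard, unreliable part (speech recognition) to show only the command-matching step; `transcribe`, the command phrases and the action names are all hypothetical, not Siri's or Glass's actual interface.

```python
# Toy voice-command dispatcher. A real system replaces transcribe() with a
# speech-recognition engine; that is the reliability bottleneck noted above.

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text engine; here the 'audio' is text."""
    return audio.decode("utf-8")

# Illustrative phrase-to-action table (assumed names).
COMMANDS = {
    "take a picture": "camera.capture",
    "send a message": "messages.compose",
    "ok glass": "assistant.wake",
}

def handle_utterance(audio: bytes) -> str:
    """Transcribe audio and look up the matching command."""
    text = transcribe(audio).lower().strip()
    # Exact matching for simplicity; real assistants use fuzzy/NLU matching.
    return COMMANDS.get(text, "unrecognized")
```

Note the design choice: everything after transcription is a plain lookup, which is why recognition accuracy, not command dispatch, is where these systems succeed or fail.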
At the rate that smartphone capabilities are expanding and developing now, it's just a matter of time before VUI takes centre stage as the primary form of human-computer interaction for any computing system.

Tangible User Interface (TUI).

Imagine having a computer system that fuses the physical environment with the digital realm to enable the recognition of real-world objects. In Microsoft PixelSense (formerly known as Surface), the interactive computing surface can recognize and identify objects that are placed onto the screen. In Microsoft Surface 1.0, cameras beneath the surface detect what touches the screen. This allows the system to capture and react to the items placed on the screen. (Image credit: ergonomidesign)

In an advanced version of the technology (the Samsung SUR40 with Microsoft PixelSense), the screen includes sensors instead of cameras to detect what touches it. On this surface, you could create digital paintings with paintbrushes based on the input from the actual brush tip. The system is also programmed to recognize sizes and shapes and to interact with embedded tags.
For example, smartphones placed on the surface could trigger the system to seamlessly display the images in the phone's gallery onto the screen.

Wearable Computer.

As the name suggests, wearable computers are electronic devices you can wear like an accessory or apparel: a pair of gloves, eyeglasses, a watch or even a suit. The key feature of a wearable UI is that it should keep your hands free and not hinder your daily activities. In other words, it serves as a secondary device, accessed as and when you wish. (Image source: sonymobile.com)
Think of it as having a watch that can work like a smartphone. Sony already released an Android-powered SmartWatch earlier this year that can be paired with your Android phone via Bluetooth.
It can provide notifications of new emails and tweets. As with all smartphones, you can download compatible apps onto the Sony SmartWatch for easy accessibility. Expect more wearable UI in the near future as microchips with smart capabilities grow ever smaller and get fitted into everyday wear.

Sensor Network User Interface (SNUI).

Here's an example of a fluid UI where you have multiple compact tiles, made up of color LCD screens, built-in accelerometers and IrDA infrared transceivers, that are able to interact with one another when placed in close proximity.
Let's make this simple: it's like Scrabble tiles with screens that change to reflect data when placed next to each other. (Image credit: nordicsemi)

As you can see in this demo video of Siftables, users can physically interact with the tiles by tilting, shaking, lifting and bumping them against other tiles. These tiles can serve as a highly interactive learning tool for young children, who receive immediate reactions to their actions. SNUI is also great for simple puzzle games where gameplay includes shifting and rotating tiles to win. Then there's also the ability to sort images physically by grouping the tiles together according to your preferences. It is a more crowd-enabled TUI: instead of one screen, it's made up of several smaller screens that interact with one another.

Most Highly-Anticipated UI?
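The defining behavior of such tiles is proximity sensing: a tile only exchanges data with neighbors close enough to be "seen" by its infrared transceiver. Here is a toy model of that idea; the `Tile` class, the coordinate scheme and the sensing radius are illustrative assumptions, not the Siftables API.

```python
# Toy model of sensor-network tiles: tiles within an assumed sensing
# radius of each other count as neighbors that could exchange data.
import math

class Tile:
    def __init__(self, label: str, x: float, y: float):
        self.label, self.x, self.y = label, x, y

def neighbors(tile: Tile, tiles: list, radius: float = 1.5) -> list:
    """Labels of tiles close enough to interact with the given tile."""
    return sorted(
        t.label for t in tiles
        if t is not tile and math.hypot(t.x - tile.x, t.y - tile.y) <= radius
    )
```

With tiles at (0, 0), (1, 0) and (5, 5), only the first two would pair up, mirroring how bumping two Siftables together triggers an interaction while a tile across the table stays inert.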
As these UI become more intuitive and natural for the new generation of users, we are treated to a more immersive computing experience that will continually test our ability to digest the flood of knowledge they have to share. It will be overwhelming and, at times, exciting, and it's definitely something to look forward to in new technologies to come.

More! Interested to see what the future has in store for us? Check out the links below. Which of these awesome UI are you most excited about? Or do you have any other ideas for the UI of the next gen? Share your thoughts here in the comments.
Patent US80237… : Apparatus, method, computer program and user interface for enabling access to functions.

FIELD OF THE INVENTION

Embodiments of the present invention relate to an apparatus, method, computer program and user interface for enabling access to functions. In particular, they relate to an apparatus, method, computer program and user interface for enabling access to functions in response to a fingerprint input.

BACKGROUND TO THE INVENTION

Many electronic apparatus have a large number of functions and are capable of storing many different types of information. For example, a mobile cellular phone typically has several different communications functions, including making telephone calls, SMS messages, MMS messages, Bluetooth messaging and internet access, as well as other functions such as camera functions, a music player and calendar functions.
The phone is also able to store information relating to each of these functions, for example contact information, received and sent messages, digital images captured by the camera, and audio files. Typically a user can access the various functions of the apparatus via a menu or a list of options. However, as the number of functions of the apparatus increases and more information is stored on the device, the number of options stored in the menu also increases, which makes the menu more complicated and laborious to navigate. Also, if a user wishes to access more than one function or piece of information to complete a task, they may need to navigate through more than one menu. Shortcuts can reduce this burden; however, the number of shortcuts which can be provided is limited by the user input device of the apparatus. It would be beneficial to provide an apparatus which enables a user to quickly access a large range of functions of the apparatus.

BRIEF DESCRIPTION OF THE INVENTION

According to one embodiment of the invention there is provided an apparatus comprising: a memory for storing information associating a fingerprint input with a function of the apparatus; a user input device comprising a device for detecting a multi-fingerprint input, the multi-fingerprint input comprising a plurality of fingerprints where each fingerprint is associated with a different function of the apparatus; and a processor configured to identify the plurality of fingerprints within the multi-fingerprint input and, in response to the identification of the plurality of fingerprints, determine a function associated with the multi-fingerprint input and enable access to that function, wherein the function associated with the multi-fingerprint input is a combination of the functions associated with the plurality of fingerprints within the multi-fingerprint input.
This provides the advantage that a user can quickly access a large number of functions using multi-fingerprint inputs. As there are a large number of combinations of multi-fingerprint inputs which may be made, there are a large number of possible shortcuts to functions which may be provided. Also, as the multi-fingerprint input enables access to a function which is a combination of the functions associated with each of the fingerprints within the multi-fingerprint input, this enables access to more specific functions and may enable a user to avoid having to navigate through multiple menus to complete a task. As the function associated with the multi-fingerprint input is a combination of the functions associated with each of the individual fingerprints within the multi-fingerprint input, this also makes the apparatus more intuitive for the user to use, as they do not need to remember which functions are associated with every possible multi-fingerprint input because it can be easily deduced from the functions associated with each of their fingerprints.
Furthermore, as fingerprint information is used to access the functions of the apparatus and this information is unique to the user of the apparatus, this provides an added level of security to the apparatus. In some embodiments of the invention the user input device may enable a user to program the apparatus by assigning functions to each of their fingerprints. This provides the advantage that it enables a user to personalize the apparatus so that the multi-fingerprint inputs provide access to the functions which they use most often, and so that the multi-fingerprint inputs are the most intuitive inputs for the user. In some embodiments at least one fingerprint within the multi-fingerprint input is associated with an application function and at least one fingerprint is associated with a parameter function. An application function is a function associated with a particular application of the apparatus, such as a communications function or internet browsing. An application function may be a general application, for example all messaging functions, or a subset of functions within a general application, for example SMS messaging.
A parameter function is an item or items of information which may be used to implement an application function. For example, a parameter function may be contact information such as phone numbers or URL addresses which can be used to send a message or access a website. According to another embodiment of the invention there is provided a method comprising: detecting a multi-fingerprint input comprising a plurality of fingerprints where each individual fingerprint is associated with a different function of an apparatus; identifying the plurality of fingerprints within the multi-fingerprint input; determining, in response to the identification of the plurality of fingerprints, a function associated with the multi-fingerprint input, where the function associated with the multi-fingerprint input is a combination of the functions associated with the plurality of fingerprints within the multi-fingerprint input; and enabling access to the function associated with the multi-fingerprint input.
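The claimed combination of an application function with a parameter function can be sketched as a simple lookup-and-combine step. This is an illustration of the idea only, not the patent's implementation: the finger names, the function tables and the combined-action format are all hypothetical.

```python
# Sketch of the claimed idea: each enrolled fingerprint maps to a function,
# and a multi-fingerprint input combines them (application + parameter).
# All names here are illustrative, not taken from the patent.

APPLICATION_FUNCTIONS = {         # e.g. "send an SMS", "make a call"
    "right_index": "sms_message",
    "right_middle": "phone_call",
}
PARAMETER_FUNCTIONS = {           # e.g. stored contact information
    "left_index": "contact:alice",
    "left_thumb": "contact:bob",
}

def resolve(fingerprints: list) -> str:
    """Combine the functions of the identified fingerprints into one action."""
    app = next((APPLICATION_FUNCTIONS[f] for f in fingerprints
                if f in APPLICATION_FUNCTIONS), None)
    param = next((PARAMETER_FUNCTIONS[f] for f in fingerprints
                  if f in PARAMETER_FUNCTIONS), None)
    if app and param:
        return f"{app}({param})"  # e.g. start composing an SMS to a contact
    return app or param or "no_action"
```

Pressing the right index and left index fingers together would thus resolve to composing an SMS to the stored contact, in one input instead of a menu traversal, which is the shortcut benefit the text describes.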
According to another embodiment of the invention there is provided a computer program comprising program instructions for controlling an apparatus, the apparatus comprising a memory for storing information associating a fingerprint input with a function of the apparatus, and a user input device comprising a device for detecting a multi-fingerprint input, the program instructions providing, when loaded into a processor: means for detecting a multi-fingerprint input comprising a plurality of fingerprints where each individual fingerprint is associated with a different function of an apparatus; means for identifying the plurality of fingerprints within the multi-fingerprint input; means for determining, in response to the identification of the plurality of fingerprints, a function associated with the multi-fingerprint input where the function is a combination of the functions associated with the plurality of fingerprints within the multi-fingerprint input; and means for enabling access to the function associated with the multi-fingerprint input. According to another embodiment of the invention there is provided a user interface comprising: a device for detecting a multi-fingerprint input, the multi-fingerprint input comprising a plurality of fingerprints where each fingerprint is associated with a different function of the apparatus; wherein the user interface is configured, in response to the detection of the multi-fingerprint input, to enable access to a function associated with the multi-fingerprint input, and wherein the function associated with the multi-fingerprint input is a combination of the functions associated with the plurality of fingerprints within the multi-fingerprint input. The apparatus may be for wireless communication, accessing the internet, viewing mobile television or for storing information such as digital images or audio files etc. The functions of the apparatus may be accessible via fingerprint inputs.
BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the present invention reference will now be made by way of example only to the accompanying drawings in which: FIG. 1 schematically illustrates an electronic apparatus according to an embodiment of the present invention; FIGS. 3A to 3E illustrate a first method of associating a function of the apparatus with a fingerprint according to an embodiment of the present invention; FIGS. 4A to 4E illustrate a second method of associating a function of the apparatus with a fingerprint according to a second embodiment of the present invention; FIGS. 5A and 5B illustrate a method of making a multi-fingerprint input according to an embodiment of the present invention; and FIGS. 6A and 6B illustrate a second method of making a multi-fingerprint input according to an embodiment of the invention.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

The Figures illustrate an apparatus 1 comprising: a memory 5 for storing information associating a fingerprint input with a function of the apparatus; a user input device for detecting a multi-fingerprint input; and a processor configured to enable access to the associated function.
FIG. 1 schematically illustrates an electronic apparatus 1. Only the features referred to in the following description are illustrated.
It should, however, be understood that the apparatus 1 may comprise additional features that are not illustrated. The electronic apparatus 1 may be, for example, a mobile cellular telephone, a personal computer, a personal digital assistant, a digital camera, a personal music player or any other electronic apparatus that enables a user to make fingerprint inputs to control the apparatus.
The illustrated electronic apparatus 1 comprises a user interface, a processor 3 and a memory 5. The processor 3 is connected to receive input commands from the user input device. The processor 3 is also connected to write to and read from the memory 5. The user interface comprises a user input device including a device 18 for detecting fingerprint inputs. The device 18 may be operable to detect fingerprint inputs comprising single fingerprints and also multi-fingerprint inputs comprising a plurality of fingerprints. The device 18 for detecting fingerprint inputs may comprise, for example, a touch-sensitive area of the display. The fingerprint information may then be stored in the memory 5.