Posts
Showing posts from February 12, 2017
Flexible OLED display If the rigid touchscreens on today's smartphones still aren't responsive enough to your commands, then you might be first in line to try out flexible OLED (organic light-emitting diode) displays. An OLED is an organic semiconductor that can still emit light even when rolled or stretched. Stick it on a bendable plastic substrate and you have a brand new, less rigid smartphone screen. Furthermore, these new screens can be twisted, bent or folded to interact with the computing system within: bend the phone to zoom in and out, twist one corner to turn the volume up, twist the other corner to turn it down, twist both sides to scroll through photos, and more. Such a flexible UI lets us interact naturally with the smartphone even when our hands are too preoccupied to use a touchscreen. This could well be the answer to the sensitivity (or lack thereof) of smartphone screens towards gloved fingers or when fingers are ...
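To make the idea concrete, the bend-and-twist interactions described above boil down to mapping physical deformations to commands. Here is a minimal sketch of that dispatch; the gesture names and handler functions are invented for illustration, not any real flexible-display API.

```python
# Hypothetical sketch: mapping flex gestures on a bendable screen to actions.
# Gesture names and handlers are assumptions made up for illustration.

def zoom(direction):
    return f"zoom {direction}"

def volume(direction):
    return f"volume {direction}"

def scroll_photos():
    return "scroll photos"

# Each physical deformation maps to an action from the post.
FLEX_GESTURES = {
    "bend_inward": lambda: zoom("in"),
    "bend_outward": lambda: zoom("out"),
    "twist_top_corner": lambda: volume("up"),
    "twist_bottom_corner": lambda: volume("down"),
    "twist_both_sides": scroll_photos,
}

def handle_gesture(name):
    action = FLEX_GESTURES.get(name)
    return action() if action else "unrecognized gesture"

print(handle_gesture("twist_top_corner"))  # volume up
```

A table-driven dispatch like this keeps the gesture vocabulary in one place, which matters if the set of recognizable deformations keeps growing.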
Brain-Computer Interface Our brain generates all kinds of electrical signals as we think, so much so that each specific thought has its own brainwave pattern. These unique electrical signals can be mapped to specific commands, so that merely thinking the thought actually carries out the command. With the EPOC neuroheadset created by Tan Le, co-founder and president of Emotiv Lifescience, users don a futuristic headset that detects the brainwaves generated by their thoughts. As you can see from this demo video, the command executed by thought is pretty primitive (i.e. pulling a cube towards the user), and even then the detection seems to run into difficulties. It looks like this UI may take a while to be adequately developed. In any case, envision a (distant) future where one could operate computer systems by thought alone. From the concept of a 'smart home' where one could turn lights on or off without having to step out of ...
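The core idea of mapping brainwave patterns to commands can be sketched as nearest-pattern matching: each trained "thought" is stored as a feature vector, and an incoming sample is matched to the closest one. Real EEG classification is far more sophisticated; the vectors and command names below are made up purely for illustration.

```python
import math

# Toy sketch of thought-to-command mapping. Each trained thought is a stored
# feature vector; an incoming brainwave sample is matched to the nearest one.
# The feature vectors and commands are invented for illustration.

TRAINED_PATTERNS = {
    "pull": [0.9, 0.1, 0.2],
    "push": [0.1, 0.8, 0.3],
    "rotate": [0.2, 0.3, 0.9],
}

def classify(sample):
    """Return the command whose stored pattern is nearest to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAINED_PATTERNS, key=lambda cmd: dist(sample, TRAINED_PATTERNS[cmd]))

print(classify([0.85, 0.15, 0.25]))  # pull
```

The difficulty Tan Le's demo runs into is visible even in this toy: when a sample sits between two stored patterns, nearest-neighbour matching flips unpredictably between commands.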
Gesture Interfaces The 2002 sci-fi movie Minority Report portrayed a future where interactions with computer systems happen primarily through gestures. Wearing a pair of futuristic gloves, Tom Cruise, the protagonist, is seen performing various hand gestures to manipulate images, videos and datasheets on his computer system. A decade ago, it might have seemed a little far-fetched to have a user interface where spatial motions are detected so seamlessly. Today, with the advent of motion-sensing devices like the Wii Remote in 2006 and the Kinect and PlayStation Move in 2010, user interfaces of the future might just be heading in that direction. In gesture recognition, the input comes in the form of hand or other bodily motion to perform computing tasks that today still require a device, touchscreen or voice input. The addition of the z-axis to our existing two-dimensional UI will undoubtedly improve the human-computer interaction experien...
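The z-axis point above can be sketched concretely: a depth sensor like the Kinect reports hand positions in three dimensions, and even a crude classifier can tell a swipe (x-axis) from a push toward the screen (z-axis). The trajectories and thresholds below are invented for illustration.

```python
# Minimal sketch of gesture recognition from 3-D motion samples, the kind of
# input a Kinect-style depth sensor provides. Values are invented.

def classify_motion(points, threshold=0.3):
    """Classify a hand trajectory by its dominant axis of movement."""
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    dz = points[-1][2] - points[0][2]
    moves = {"swipe": abs(dx), "lift": abs(dy), "push": abs(dz)}
    gesture, magnitude = max(moves.items(), key=lambda kv: kv[1])
    return gesture if magnitude > threshold else "idle"

# A hand moving mostly toward the sensor (z-axis) registers as a push --
# a gesture a flat two-dimensional UI simply cannot see.
print(classify_motion([(0.0, 0.0, 0.0), (0.05, 0.02, 0.6)]))  # push
```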
Application Sandboxing Application sandboxing is an approach to software development and mobile application management (MAM) that limits the environments in which certain code can execute. The goal of sandboxing is to improve security by isolating an application, preventing outside malware, intruders, system resources or other applications from interacting with the protected app. The term comes from a child's sandbox, in which the sand and toys are kept inside a small container or walled area. Developers who don't want an application to be touched by outside influences can wrap security policies around it. Application sandboxing is controversial because its complexity can cause more security problems than the sandbox was originally designed to prevent. For example, if a developer builds an application that needs to interact with a device's contacts list, sandboxing would cause that application to lose imp...
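The allow-by-policy idea can be illustrated with a toy sketch: an app may only touch resources its policy explicitly lists. The resource names and policy API here are invented; real mobile sandboxes enforce this at the operating-system level, not in application code.

```python
# Toy illustration of sandboxing: access is denied to any resource not in
# the app's policy. Names and the API are invented for illustration.

class SandboxViolation(Exception):
    pass

class Sandbox:
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def access(self, resource):
        if resource not in self.allowed:
            raise SandboxViolation(f"access to '{resource}' denied")
        return f"reading {resource}"

app = Sandbox(allowed={"app_storage"})
print(app.access("app_storage"))     # permitted by policy
try:
    app.access("contacts")           # the contacts-list problem from the post
except SandboxViolation as e:
    print(e)
```

The contacts example shows exactly the trade-off the post describes: the same wall that keeps malware out also blocks legitimate functionality unless the policy is widened.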
Google Driverless Car I can still remember watching I, Robot as a teen and being skeptical of my brother's claim that one day the driverless car would become reality. It's now a reality, made possible by… a search engine company, Google. While the data source is still a secret recipe, the Google driverless car is powered by artificial intelligence that uses input from video cameras inside the car, a sensor on the vehicle's roof, and radar and position sensors attached to different parts of the car. It sounds like a lot of effort to mimic human intelligence in a car, but so far the system has successfully driven 1,609 kilometres without human commands! "You can count on one hand the number of years it will take before ordinary people can experience this," Google co-founder Sergey Brin said. However, innovation is an achievement; consumerization is the headache, as Google currently faces the challenge to ...
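One small piece of combining cameras, the roof sensor and radar can be sketched as sensor fusion: blending several noisy estimates of the same quantity, weighted by confidence. Google's actual pipeline is not public, so the sensors, readings and weights below are entirely invented.

```python
# Hedged sketch of confidence-weighted sensor fusion. All values invented;
# this is one textbook idea behind combining camera, lidar and radar input.

def fuse(estimates):
    """estimates: list of (position, confidence) pairs -> fused position."""
    total = sum(conf for _, conf in estimates)
    return sum(pos * conf for pos, conf in estimates) / total

readings = [
    (10.2, 0.5),   # camera-based estimate, lower confidence
    (10.0, 0.9),   # roof-mounted lidar, higher confidence
    (10.4, 0.3),   # radar
]
print(fuse(readings))
```

The fused value leans toward the most trusted sensor, which is why a failure in any single input degrades the estimate gracefully rather than catastrophically.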
Parallella Parallella is going to change the way computers are made, and Adapteva offers you a chance to join in on this revolution. Simply put, it's a supercomputer for everyone: an energy-efficient computer built to process complex software simultaneously and effectively. Real-time object tracking, holographic heads-up displays and speech recognition will become even stronger and smarter with Parallella. The project has been successfully funded, with an estimated delivery date of February 2013. For a mini supercomputer, the price is really promising: just $99! It isn't recommended for non-programmers or non-Linux users, but the kit is loaded with development software for creating your own projects. I never thought the future of computing could be kick-started with just $99, made possible by crowdfunding platforms.
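The appeal of a many-core board like Parallella is running independent work simultaneously. The same idea can be sketched on an ordinary multi-core CPU with Python's standard multiprocessing pool; the workload below is a stand-in invented for illustration, not Parallella's actual SDK.

```python
# Sketch of the parallel-processing idea behind a many-core board, using
# Python's stdlib process pool. The workload is invented for illustration.

from multiprocessing import Pool

def heavy_task(n):
    # Stand-in for real-time work like object tracking or speech recognition.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four independent jobs run simultaneously across four worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(heavy_task, [10_000, 20_000, 30_000, 40_000])
    print(results)
```

On a board with many small cores, the win comes from the same pattern: splitting work into independent chunks that each core can grind through at once.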