Awesome Technology



In-display fingerprint sensors are here, but not from Apple or Samsung

The first smartphone with an in-display fingerprint sensor is kinda sorta here, and it’s probably not from whom you’d expect.

It’s not an iPhone, it’s not a Samsung Galaxy device: instead, it’s a Vivo. (In fact, rumors had already predicted that this would be the case last month.)

Image credit: Vivo

The China-based company’s device works through a "Clear ID" optical sensor from Synaptics that’s hidden below the phone’s OLED display. Scanning between the OLED display’s pixels, it effectively does the same job as the old direct-contact fingerprint sensors (if a tad more slowly).

It does require that you put your finger in an exact spot in order to work, but fortunately a fingerprint image pops up on the spot when needed so you’re not fumbling blindly across the display.

Only the beginning

Up until now, discussion of Synaptics’ in-display sensor has mainly revolved around Samsung as it’s widely believed that the still-unannounced Samsung Galaxy S9 will have Synaptics’ sensor inside it as well.

And technically, it’s possible that Samsung will still beat Vivo to the punch, as Vivo didn’t actually reveal what smartphone would first include the technology. There was a display unit on hand at CES 2018 for visitors wanting to try it out for themselves, but it’s not clear if this was the actual phone that will ship with the technology.

In a statement, the company said we’ll see it sometime in the first half of this year, which may give Samsung plenty of time to get its own device out. On the other hand, China’s CNMO site claims we may see it announced as early as tomorrow.

Synaptics claimed last month that it will have around 70 million of its in-display sensors ready to go this year, so it’s possible that Vivo and (possibly) Samsung mark only the first steps in a much wider wave of adoption.

  • New year, new tech – check out all our coverage of CES 2018 straight from Las Vegas, the greatest gadget show on Earth!  

from TechRadar – All the latest technology news http://ift.tt/2mjtHy9



Nissan’s Brain-to-Vehicle tech lets you control your car with your thoughts

Firefox isn’t just the name of a web browser. It’s also a 1982 movie starring Clint Eastwood based around a fictional Russian fighter jet controlled by the pilot’s thoughts. Someone at Nissan is apparently a fan of that movie.

Nissan’s experimental “Brain-to-Vehicle” (or “B2V” for short) technology allows cars to interpret signals from a driver’s brain. It doesn’t allow drivers to fire missiles with their thoughts, but Nissan believes the technology could help improve future driver-assistance systems and make self-driving cars more human-friendly by putting machines and people on the same page.

B2V can read and interpret a person’s brain activity in real time, and feed that information into a car’s various systems. The driver wears a helmet studded with sensors to make that happen. By analyzing brain activity, a car can predict when a human driver is about to take an action, such as turning the steering wheel or tapping the brakes, and respond accordingly.

This essentially allows the car to predict what the driver is going to do, according to Nissan. The automaker claims B2V allows driver-assist systems to make control inputs 0.2-0.5 seconds before a human driver. B2V also allows these systems’ interventions to be less obvious, Nissan says. The goal is for a person to feel like they are driving without any electronic assistance. Many current driver-assist systems can be a bit unpredictable or heavy-handed in their responses, so this is a genuine problem worth solving.
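Nissan hasn’t published how B2V actually detects this preparatory brain activity, but the timing idea can be sketched as a simple threshold detector over a motor-preparation signal. Everything below — the threshold, the sample rate, and the signal itself — is invented for illustration, not from Nissan:

```python
# Illustrative only: Nissan has not disclosed B2V's detection algorithm.
# The idea: fire the assist system as soon as a motor-preparation feature
# in the brain signal crosses a threshold, which (per Nissan's claim)
# can land 0.2-0.5 s before the driver's hands actually move.

THRESHOLD = 0.8  # normalized activation level (made up for illustration)

def assist_lead_seconds(feature, sample_rate_hz, driver_action_time_s):
    """Seconds by which assistance would precede the driver's physical action.

    `feature` is a normalized motor-preparation signal sampled over time.
    Returns None if the threshold is never crossed.
    """
    for i, value in enumerate(feature):
        if value >= THRESHOLD:
            detection_time_s = i / sample_rate_hz
            return driver_action_time_s - detection_time_s
    return None

# Example: the preparation signal crosses the threshold at t = 0.3 s,
# and the driver turns the wheel at t = 0.6 s, so the assist system
# gets roughly a 0.3 s head start.
signal = [0.1, 0.3, 0.5, 0.85, 0.9, 0.95, 1.0]
lead = assist_lead_seconds(signal, sample_rate_hz=10, driver_action_time_s=0.6)
```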

In fully autonomous cars, B2V could also analyze levels of occupant discomfort and adjust the car’s driving style to give people a more pleasant experience, according to Nissan. The technology could even be used to cue up different augmented-reality displays based on a person’s thoughts, the automaker says. That’s assuming Nissan can perfect the technology, and convince the average person to accept the somewhat-creepy idea of a car monitoring their brain activity.

B2V isn’t anywhere near ready for use in production cars, but Nissan will demonstrate it at CES 2018. The automaker is also pushing ahead with more advanced driver-assist systems and fully autonomous cars, thought-based interface or not.

from Digital Trends http://ift.tt/2Cl9ZYC



Microsoft stops selling the Xbox One Kinect adapter

You knew Kinect peripherals weren’t long for this world when Microsoft stopped producing the Kinect in October, but it’s still a sad day. The company has stopped making the Xbox Kinect Adapter that lets Xbox One S, Xbox One X and Windows PC users attach the depth-sensing camera without the presence of the original Xbox One’s proprietary port. Microsoft wants to focus its efforts on "higher fan-requested gaming accessories," a spokesperson told Polygon. In short: there wasn’t exactly rampant demand for an adapter to support a peripheral that had effectively been declared dead.

As it stands, many of those who wanted the adapter already have it. Microsoft gave the adapter away for free for 8 months after the launch of the Xbox One S, and started selling it in April for $40. And if it wasn’t already clear that you had to hurry to get one, major retailers like Amazon and Microsoft itself have listed the adapter as out of stock for months.

This still creates issues for Kinect fans who want to keep using the pioneering device and aren’t content to settle for a headset or a USB webcam. Unless you can keep a first-generation Xbox One hanging around, you’ll either have to find the adapter at a reseller or score a used example on an auction site. And if none of those are options (such as for PC-based Kinect users)… well, you’re stuck. There hasn’t exactly been an abundance of Kinect-ready software to justify the adapter (though one did just arrive a few weeks ago), but this still hurts if you wanted your Kinect sensor to remain relevant for a little while longer.

Source: Polygon

from Engadget http://ift.tt/2Cd2vac



Move over, voice: Holograms are the next user interface

During Apple’s fourth-quarter earnings call with analysts, CEO Tim Cook said, “AR is going to change everything.” He wasn’t exaggerating.

Augmented reality (AR) is shaping an entirely new paradigm for mass technology use. We’ve quickly evolved from typing on our PC keyboards, to the point-and-click of the mouse, to the smartphone’s tap or swipe, to simply asking Alexa or Siri to do things for us. Now AR brings us to the age of holographic computing. Along with Animoji, Pokémon, and face filters, a fresh and futuristic user interface is emerging.

Holographic computing is coming to us now through our phone screens instead of via lasers, which are required for textbook holography. As a result, we’re now seeing a real surge in the use of hologram-like 3D that will completely change how we interact with the world — and each other.

The evidence of this coming shift is everywhere. Apple’s iOS 11 puts AR into the hands of over 400 million consumers. The new iPhone X is purposely designed to deliver enhanced AR experiences with 3D cameras and “Bionic” processors. Google recently launched the Poly platform for finding and distributing virtual and augmented reality objects. Amazon has released Sumerian for creating realistic virtual environments in the cloud. We’re also seeing an AR-native content creation movement and a steady stream of AR features coming from Facebook, Snapchat, Instagram, and scads of other tech players.

The captivating user experiences of 3D are obviously attractive for gaming and entertainment, but they’re capable of so much more. And as familiarity with the holographic experience spreads through popular games and filters, this new interface will begin to dominate other functions that are well-suited to its charms.

Already, it’s making inroads in these two areas:

Training – Holography is useful for virtual hands-on guidance to explain a process, complete a form, or orient a user. It also can simulate real-life scenarios such as emergency response, sales interactions, etc. AR enhancements can be overlaid for greater depth and variety in information presentation, such as floating text bubbles to provide detail about a particular physical object, chronological procedure mapping for performing a task, or virtual arrows pointing to the correct button to push on a console.

There’s less need to travel to a classroom if you can launch interactive, immersive 3D presentations on any desk, wall, or floor and “experience” them through the screen in your hand. And unlike standard video, holographic interfaces add an extra experiential element to the training process. As a result, users can more readily contextualize what they are learning.

Customer experience – Consumers are using AR and holographic computing for self-selection, self-service, and self-help. Soon they’ll be using it for even more. For example, IKEA’s AR app lets you point your phone at your dining room to see how a new table will look in the space. Why not just point your phone at the shipping box to be holographically guided through the assembly process when it’s delivered? Holographic computing will also emerge as the preferred means of getting product information and interacting with service agents. Walkthroughs of hotel rooms and vacation destinations with a 3D virtual tour guide/travel planner/salesperson are not too far in the future.

Many other appealing use cases exist, of course. And as this new user interface matures, it will become the preferred mode of interaction and quickly feel like second nature.

The sheer amount of money being thrown at speedy development by a number of different players shows it’s not yet clear who will define the ultimate holographic interface. Will it remain phone-based? Involve glasses? Shift to desktop? Evolve beyond our current hardware with on-eye projection technology? All of the above? Who can say?

One thing we do know is that companies of all types are investing significant brainpower to leverage this emerging technology. And we will see mass adoption of, and widespread affinity for, the holographic interface.

Simon Wright is director of AR and VR at Genesys, maker of omnichannel customer experience and contact center solutions. He is based in Sheffield, England and can be reached at simon.wright@genesys.com.

from VentureBeat http://ift.tt/2BLcXt8



Improving the user experience with an LED ring

[EPNC=Reporter Lee Na-ri] A good user experience is always essential to earning a positive reception in the market. Auditory, visual, and tactile techniques can all be used to improve the user experience of consumer electronics and smart home products such as speakers, door locks, smoke detectors, and wearables. Visual techniques are among the most direct and the easiest to apply, and one such technique is the LED (Light-Emitting Diode) ring. An LED ring can produce eye-catching effects such as “Breathing”, “Chasing”, and “Beating”. [Figure 1]
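As a concrete illustration of the “Breathing” effect mentioned above, here is a minimal sketch that generates one cycle of PWM duty-cycle values for an LED; the step count and timing are arbitrary choices for illustration, not from the article:

```python
import math

def breathing_levels(steps: int = 60) -> list:
    """One 'Breathing' cycle as PWM duty-cycle values (0-100%):
    brightness rises and falls along a raised-cosine curve."""
    return [
        round(50 * (1 - math.cos(2 * math.pi * i / steps)))
        for i in range(steps + 1)
    ]

# Writing these values to an LED's PWM duty cycle at a fixed interval
# (say, ~30 ms per step) produces the smooth fade-in/fade-out effect.
levels = breathing_levels()
```

“Chasing” and “Beating” follow the same pattern: a precomputed brightness (or per-LED) sequence stepped through on a timer.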

from EPnC News – All Articles http://ift.tt/2BoKgSL



Is this gesture-controlled steering wheel genius or madness?

It might take a decade or so, but it’s starting to look like self-driving cars are the way of the future. But before we get there, how do you feel about a gesture-controlled steering wheel? It might sound strange, but ZF believes that it has the potential to make cars safer and easier to use.

At first glance, the concept of putting a touchscreen on a steering wheel might seem strange, if not crazy — but it isn’t meant for current cars. Rather, ZF hopes that it will make self-driving cars easier to control. For the foreseeable future, even the most advanced self-driving cars will need a way for human drivers to take control, so we aren’t getting rid of steering wheels anytime soon. However, those cars will also need a way for users to input directions, set destinations, and other tasks. A touchscreen is a natural fit for that sort of thing, given that we already use them on devices every day.

The embedded touchscreen does present its own unique challenges, however. One of the most pressing is the fact that in modern cars, the airbags are stored inside the steering wheel. As a potential workaround, ZF found a way to store the airbag in the rear rim of the steering wheel. In the event of an accident, the airbag will wrap around the wheel, protecting the driver’s face from the touchscreen.

Juergen Krebs, VP of engineering for ZF, believes that the company’s touchscreen steering wheel may very well be the future of how we interact with our cars.

“ZF’s advanced steering wheel concept represents an important step in the evolution of automated driving while helping to enhance safety and driver awareness,” Krebs said in a statement. “As we prepare for Level 3 automated functions, the hand-over of control between vehicle and driver using highly accurate feedback will be critical. We believe our new concept is the most intuitive and provides the clearest feedback to the driver.”

Those hoping to get a better look at ZF’s futuristic steering wheel can see it for themselves next month at the company’s CES booth in Las Vegas.

from Digital Trends http://ift.tt/2zjmvc4



Under-display fingerprint reader arrives on ‘major’ phone in January

Under-the-screen fingerprint readers won’t just be reserved for rough prototypes in the near future. Synaptics has sent word that a "major" smartphone manufacturer in the "top five" will unveil a phone using its Clear ID sensor at CES in January. It’s not offering any clues as to who the mystery early adopter might be, although Vivo was the first to show it off. We wouldn’t be surprised if one of Vivo’s sibling brands (such as Oppo) had the honors, although we certainly wouldn’t rule out competition like Huawei or Xiaomi.

These under-display sensors aren’t flawless, as there tends to be a delay compared to a reader that’s in direct contact with your digits. Synaptics isn’t bothered by that, though — it claims that Clear ID is "twice as fast" as 3D face recognition (i.e. Face ID on the iPhone X) and that it’s more flexible, since you don’t need to be within visual range of your phone.

If it’s widely adopted, the technology could prove vital. Now that tall-screened phones are practically de rigueur, phone makers have usually had little choice but to move the reader (typically to the back) or else use another biometric sign-in method. Clear ID theoretically lets phone brands avoid that choice. They can put the reader where it’s most convenient without giving up that all-important eye-catching display.

Source: Synaptics

from Engadget http://ift.tt/2iWKZic



Can digital signage find its voice?

COMMENTARY


Dec. 12, 2017 | by Jeff Hastings


There’s a lot of buzz in the industry about how voice integration in digital signage may be the next big step forward in how signage is used to interface with customers. And while I don’t doubt that voice integration will become much more prominent in the years ahead, as an industry we have some interesting challenges to address.

First and foremost, we need to be realistic about how we intend to use voice recognition to engage viewers. At a technical level, voice recognition requires adequate processing power to recognize and respond to voice commands. To economize on the computing power required to interpret and appropriately respond to spoken interaction, it’s important to simplify the interaction. For example, instead of making a particular signage installation capable of responding to complex queries, try to home in on a very specific vocabulary to trigger some of the most common interactions.

Complex interaction will someday have its place in digital signage, but not before the foundational work is laid. A well-executed installation with rudimentary yet highly functional interaction is far better for the customer than a complicated interaction with a high rate of failure.

Secondly, the absence of pervasive internet connectivity for all signage devices is a challenge in itself. Especially in retail and other disparate environments, it’s simply not feasible to deliver a live internet connection to all devices in the field. Without an internet connection, it’s not possible to deliver an interactive experience based on live database queries in real time.

The solution? Script a handful of very standard interactions that are triggered by common voice commands, yet don’t require a persistent internet connection to complete. This leaves the customer feeling as if they’ve communicated with the signage, when in fact they’ve simply triggered a basic if/then dialogue that’s been carefully scripted and confined to the display and the media player feeding content to the display.
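A scripted if/then dialogue of that kind can be sketched in a few lines. The trigger words and responses below are invented for illustration, and the speech-to-text step is assumed to happen on the media player itself:

```python
# Offline, scripted voice interaction: a small fixed vocabulary mapped
# to canned responses, with no network connection required. Triggers
# and responses are illustrative, not from any real deployment.

RESPONSES = {
    "hours": "We're open from 9 AM to 9 PM, seven days a week.",
    "restroom": "Restrooms are at the back of the store, on your left.",
    "sale": "Today's sale items are on aisle 4.",
}

FALLBACK = "Sorry, I didn't catch that. You can ask about hours, restrooms, or sales."

def respond(transcript: str) -> str:
    """Map an on-device speech-recognition transcript to a scripted reply."""
    text = transcript.lower()
    for trigger, reply in RESPONSES.items():
        if trigger in text:
            return reply
    return FALLBACK
```

Because the whole dialogue is confined to a lookup table on the player, it works identically with or without an internet connection.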

Lastly, as we continue to explore what’s possible with voice-based digital signage, we need to carefully assess what depth of communication is acceptable to general consumers. Someone is likely to interact quite freely with their Amazon Echo or Google Home in the comfort and privacy of their living room; yet people are more prone to a guarded, less emotive form of communication in public settings. For this reason alone, the interactive norms of voice-based digital signage are likely to evolve much differently compared to the evolution of voice-activated smart home devices.

To be clear, voice-activated digital signage is in its infancy and there’s a great deal of room to grow. I don’t expect that conversing with digital signage will become the norm any time soon. But we’re taking small steps in that direction. If managed carefully, we’re going to see a much deeper level of voice-based interaction with digital signage in the years ahead.

Image via iStock.com.





Jeff Hastings

BrightSign CEO Jeff Hastings joined BrightSign in August 2009, while it was still a division of Roku Inc. In late 2010, with digital signage activities growing rapidly, BrightSign was spun out as a separate company. The holder of eight U.S. patents, he also has a history of tech industry leadership, including as president of mp3 pioneer Rio.





from Latest Media http://ift.tt/2yiv7vS



Samsung files patent application for palm-recognition technology

[EPNC=Reporter Jung Hwan-yong] Anyone who uses a smartphone has probably forgotten a password at least once. You can try to recover it from the hint you set, but that hint isn’t exactly reliable. Biometric security based on fingerprint recognition is being adopted more and more these days, and Samsung Electronics is likewise deep into R&D on biometric technology. Samsung has filed a patent application for a technology that unlocks the device by recognizing the user’s palm, with no need to remember a password. According to Samsung, the unique creases of a person’s palm can serve as biometric information that cannot be duplicated. Going forward, depth-sen

from EPnC News – All Articles http://ift.tt/2A1pNTt



Microsoft kills off Kinect, stops manufacturing it

Microsoft is finally admitting Kinect is truly dead. After years of debate over the accessory’s fate, the software giant has stopped manufacturing it. Fast Co Design reports that the depth-camera and microphone accessory has sold around 35 million units since its debut in November 2010. Microsoft’s Kinect for Xbox 360 even became the fastest-selling consumer electronics device back in 2011, winning recognition from Guinness World Records at the time.

In the years since its debut on Xbox 360, a community built up around Microsoft’s Kinect. It was popular among hackers looking to create experiences that tracked body movement and sensed depth. Microsoft tried to push Kinect even further into the mainstream with the Xbox One, but the pricing and features failed to live up to expectations. Microsoft was then forced to unbundle Kinect from Xbox One, and produced an unsightly accessory to attach the Kinect to the Xbox One S. After early promise, Kinect picked up a bad name for itself.



It’s easy to dismiss Kinect as a failed project, but the reality is that the research and hardware have helped Microsoft advance its products elsewhere. HoloLens uses many of the Kinect technologies for depth-sensing, and many laptops now ship with Windows Hello cameras that apply lessons learned from Kinect to recognize people’s faces. Microsoft is even using some Kinect technology in its latest Windows Mixed Reality headsets.

Apple now looks like the unlikely company to bring Kinect-like technology into the mainstream. The iPhone X, which goes on sale next week, will include depth-sensing cameras to log an owner into the device. The top notch of Apple’s iPhone X is essentially a miniature Kinect, and Apple has used its acquisition of PrimeSense (makers of the original Kinect) to shrink the hardware down to a phone.

from The Verge http://ift.tt/2lf2MWc