Friday, December 15, 2017

LiDAR News: TetraVue, Dibotics

IEEE Spectrum publishes an article "TetraVue Says Its Lidar Will Dominate the Robocar Business." The claimed reason for dominance is the high spatial resolution - 2MP in the current TetraVue design:

“We put an optical encoder between the lens and the image sensor, and it puts a time stamp on photons as they come in, so we can extract range information,” says Hal Zarem, chief executive of TetraVue.

That optical method has the advantage of scalability, which is why TetraVue’s system boasts 2 megapixels. And because the 100-nanosecond-long flashes repeat at a rate of 30 hertz, the lidar provides 60 million bits of data per second. That’s high-definition, full motion video.

“Because you get standard video as well as lidar for each pixel, you don’t have to figure which object the photon came from—it’s inherently fused in the camera,” says Zarem.

No other lidars will be needed, he adds. Translation: Say goodbye to all the other lidar companies you’ve heard about—Velodyne, for example. As for the other sensors, well, radars will survive, as will a few cameras to fill secondary roles such as showing what’s behind the car when you back up.
"

TetraVue's official PR is here. The TetraVue LiDAR operation is explained here. TrafficTechnologyToday publishes a couple of TetraVue slides:


BusinessWire: Dibotics ports its LiDAR image processing software to the Renesas R-Car platform:

"LiDAR processing today requires an efficient processing platform and advanced embedded software. By combining Renesas’ high-performance image processing, low-power automotive R-Car system-on-chip (SoC) with Dibotics’ 3D simultaneous localization and mapping (SLAM) technology, the companies deliver a SLAM on Chip™ (Note 1). The SLAM on Chip implements 3D SLAM processing on a SoC, a function that used to require a high-performance PC. It also realizes 3D mapping with LiDAR data only, eliminating the need to use inertial measurement units (IMUs) and global positioning system (GPS) data. The collaboration enables a real-time 3D mapping system with low power consumption and high-level functional safety in automotive systems.

Unlike existing approaches, Dibotics’ Augmented LiDAR™ software realizes 3D SLAM technology that only requires data from the LiDAR sensor to achieve 3D mapping. It does not require additional input from IMUs, GPS, or wheel encoders, which eliminates extra integration efforts, lowers bill-of-material (BOM) costs and simplifies development. In addition, the software realizes point-wise classification (Note 3), detection and tracking of shape, speed, and trajectory of moving objects, and Multi-LiDAR fusion.
"


Meanwhile, Velodyne publishes a visionary article "Six Gifts LiDAR Can Give to the World," mostly praising the company's own products. And Panasonic presents a self-driving LiDAR-powered fridge, as shown in a Tech Insider video:


EETimes Interviews ams CEO

EETimes' Junko Yoshida publishes her talk with ams CEO Alexander Everke about the company's new focus on sensing and 3D imaging. A few quotes:

"Ams is focused on acquiring technologies, not the revenue.

Everke is enthusiastic about Ams’ 3D adventure. He called 3D sensing “one of the mega trends of our industry that will drive the market over the next 10 years.” In smartphones, industry 4.0, automotive and emerging medical applications, the imaging world is rapidly transitioning from capturing 2D information to 3D, said the Ams CEO.

With Heptagon, Ams is adding ToF sensors. Ams’ Heptagon acquisition is considered pivotal for the company’s future growth. Heptagon assets are helping to turn Ams into “a very interesting wafer-level optical packaging company.”
"


Thursday, December 14, 2017

Research In China View on the Industry

ResearchInChina: The global CCM market was worth USD16.611b in 2015, a year-on-year rise of 3.8% from 2014 and the slowest rate since 2010. The market fell modestly in 2016 due to a drop in shipments of Apple phones, which carry the CCMs with the highest unit price. The market rebounded in 2017, driven by dual cameras, growing by 4.3% to USD17.232b, and is expected to reach USD19.134b in 2021.


A CCM is composed of the lens, VCM (voice coil motor), IRCF (IR cut filter), CIS (CMOS image sensor), DSP and FPC (flexible printed circuit). Among them, the CIS, lens and VCM carry the highest value. In a mainstream 13MP camera module, for example, the CIS, lens and VCM make up about 40.6%, 14.3% and 11.3% of the total cost, respectively.

CIS: The global CIS market size approximated USD10.516b in 2016, up 5.6% from a year earlier, and is expected to grow 4.0% in 2017 and hit USD12.621b in 2021. Sony is the undisputed leader with a market share of about 42% in 2016, followed by Samsung (18%), OmniVision (12%), ON Semi (6%) and Panasonic (3%). The top 3 companies held a combined 73% share and the top 5 held 82% in 2016. Notably, almost all 13MP-and-above products are made by the top three vendors, indicating a high and still-growing market concentration.

Optical Lens: Global shipments of lenses (front and rear) totaled 3.49 billion pieces in 2016, a year-on-year rise of 7.9%, including 1.64 billion 5P-and-above lenses, a 19.7% increase from a year earlier and far higher than the industry growth rate, while shipments of below-5P lenses continued to fall. Global shipments of optical lenses are expected to reach 3.763 billion pieces in 2021, including 2.728 billion 5P-and-above lenses, a 72.5% share. Taiwan's LARGAN Precision, the behemoth in the market, shipped 1.15 billion lenses for a 32.9% market share in 2016. It is expected that, with hot sales of the new-generation iPhone and continuous upgrading of mobile phone lenses, LARGAN Precision will seize 34.3% by market share and 16.4% by shipments.

VCM: Global demand for mobile phone VCMs was 1.49 billion pieces in 2016 and will climb to 3.2 billion pieces in 2021, a CAGR of 17.1%. The hundreds of VCM producers are primarily divided into Japanese ones (Alps, Mitsumi Electric, TDK), South Korean ones (Samsung Electric, JAHWA, Hysonic and LG), and Chinese ones (New Shicoh Motor, B.L. Electronics, Hozel Electronics, and Liaoning Zhonglan Electronic Technology). Japanese and South Korean players have more advanced technologies and mature processes. As Chinese VCM technology and processes advance, local VCM makers, with advantages in price and service, have become more competitive and are expected to break the monopoly of their Japanese and South Korean counterparts.
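
A quick arithmetic check of the growth rates implied by the quoted forecasts (rounded; the report's own baselines may differ slightly):

```python
# Rough check of the compound annual growth rates implied by the quoted figures.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"CCM 2017->2021: {cagr(17.232, 19.134, 4):.1%}")   # ~2.7% per year
print(f"CIS 2016->2021: {cagr(10.516, 12.621, 5):.1%}")   # ~3.7% per year
print(f"VCM 2016->2021: {cagr(1.49, 3.2, 5):.1%}")        # ~16.5%, close to the quoted 17.1%
```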

Invensas Completes DBI Technology Transfer to DALSA

BusinessWire: Invensas, a subsidiary of Xperi, announces the successful technology transfer of its Direct Bond Interconnect (DBI) to Teledyne DALSA. This capability enables Teledyne DALSA to deliver next-generation image sensors to customers in the automotive, IoT and consumer electronics markets. Invensas and Teledyne DALSA announced the signing of a development license in February 2017.

“In partnership with Invensas, we have successfully completed the transfer of its revolutionary DBI technology to our manufacturing facilities in Bromont,” said Edwin Roks, president of Teledyne DALSA. “We are now ready to offer this enabling platform as part of our foundry services to customers, including our own business lines, seeking smaller, higher performance and more reliable MEMS and imaging solutions.”

Sunny Optics Officially Licenses ImmerVision Panomorph Lens

BusinessWire: ImmerVision, developer of exclusive and patented panomorph wide-angle imaging technology, announces that Sunny Optics has licensed panomorph lens technology for global production, and will deliver its first small form-factor panomorph high-resolution super-wide-angle lenses for smartphones and mobile devices in Q1 2018.

ON Semi Proposes CIS Technology for Analog Signal Processing

ON Semi patent application US20170350756 "Charge packet signal processing using pinned photodiode devices" by Roger Panicacci proposes using full charge transfer devices for generic analog signal processing:

"It would... be desirable to provide improved signal processing circuitry without reliance on conventional high performance capacitors that dissipate power to support charge mixing or dissipate power to support switched capacitor circuit topologies.

Embodiments of the present invention relate to signal processing circuitry configured to transfer charge packets having an adjustable size to a circuit node. Adjustable size charge packets may originate at pinned photodiode structures. Adjustable size charge packets may be transferred to circuit nodes that provide a reference voltage for a comparator in a signal processing circuit such as an ADC.
"

Examples include 1st and 2nd order sigma-delta loops below:
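
Behaviorally, a first-order sigma-delta loop of the kind the application cites can be modeled in a few lines. In the sketch below (an illustrative model, not the patented charge-domain circuit), the integrator variable stands in for charge accumulated on a sense node and the feedback subtraction stands in for removing a fixed reference charge packet:

```python
# Behavioral model of a 1-bit, first-order sigma-delta loop.
import numpy as np

def first_order_sigma_delta(x, full_scale=1.0):
    """Sigma-delta modulate a signal with samples in [0, full_scale]."""
    integ = 0.0
    bits = np.empty_like(x, dtype=int)
    for i, sample in enumerate(x):
        integ += sample                  # add the input "charge packet"
        if integ >= full_scale:          # comparator against a reference level
            bits[i] = 1
            integ -= full_scale          # feedback: subtract the reference packet
        else:
            bits[i] = 0
    return bits

# Usage: a DC input of 0.3 should produce an average bit density of ~0.3
x = np.full(10_000, 0.3)
print(first_order_sigma_delta(x).mean())   # ~0.30
```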

Wednesday, December 13, 2017

SensL 100m Ranging Demo

SensL demos a 100m range detection with its SiPM sensor:

PRNU ID

University at Buffalo paper "ABC: Enabling Smartphone Authentication with Built-in Camera" by Kui Ren, Zhongjie Ba, Sixu Piao, Dimitrios Koutsonikolas, Aziz Mohaisen, and Xinwen Fu proposes to use the sensor's PRNU (photo response non-uniformity) to securely identify a smartphone:

"First observed in conventional digital cameras, PRNU analysis is common in digital forensic science. For example, it can help settle copyright lawsuits involving photographs.

But it hasn’t been applied to cybersecurity — despite the ubiquity of smartphones — because extracting it had required analyzing 50 photos taken by a camera, and experts thought that customers wouldn’t be willing to supply that many photos. Plus, savvy cybercriminals can fake the pattern by analyzing images taken with a smartphone that victims post on unsecured websites.

The study addresses how each of these challenges can be overcome.

Compared to a conventional digital camera, the image sensor of a smartphone is much smaller. The reduction amplifies the pixels’ dimensional non-uniformity and generates a much stronger PRNU. As a result, it’s possible to match a photo to a smartphone camera using one photo instead of the 50 normally required for digital forensics.

When a customer initiates a transaction, the retailer asks the customer (likely through an app) to photograph two QR codes (a type of barcode that contains information about the transaction) presented on an ATM, cash register or other screen.

Using the app, the customer then sends the photograph back to the retailer, which scans the picture to measure the smartphone’s PRNU. The retailer can detect a forgery because the PRNU of the attacker’s camera will alter the PRNU component of the photograph.
"

Sony Sensors Compared

Basler publishes a white paper "Sensor Comparison: Are all IMXs equal?" by Dominik Lappenküper. Some food for thought from the paper:

"The first generation of this new [Pregius] sensor series includes the sensors IMX174 and IMX249. They have a pixel pitch of 5.86 um. In the first generation of the Pregius sensors, a particularly notable feature is the very high saturation capacity of over 32 ke-.

With the second generation of the Pregius series, Sony established a smaller pixel at 3.45 um. Due to the smaller pixels in the sensors of the 2nd generation, their saturation capacity greatly decreases, which results in values that are more typical for the CMOS sensors.
"


"EMVA1288 standard offers the measured value of the “absolute threshold value for sensitivity”. It states the average number of required photons so that the signal to noise ratio is exactly 1."

LiDAR News: Aeye, Ouster, Innovusion

BusinessWire: AEye (formerly US LADAR) introduces iDAR (Intelligent Detection and Ranging), which combines the world’s first agile MOEMS LiDAR, pre-fused with a low-light camera, and embedded artificial intelligence - creating software-definable and extensible hardware that can dynamically adapt to real-time demands.

AEye’s iDAR is designed to intelligently prioritize and interrogate co-located pixels (2D) and voxels (3D) within a frame, enabling the system to target and identify objects within a scene 10-20x more effectively than LiDAR-only products. Additionally, iDAR is capable of overlaying 2D images on 3D point clouds for the creation of True Color LiDAR. The introduction of iDAR follows AEye’s September demonstration of the first 360 degree, vehicle-mounted, solid-state LiDAR system with ranges up to 300 meters at high resolution.

“AEye’s unique architecture has allowed us to address many of the fundamental limitations of first generation spinning or raster scanning LiDAR technologies,” said Luis Dussan, AEye founder and CEO. “These first generation systems silo sensors and use rigid asymmetrical data collection that either oversample or undersample information. This dynamic exposes an inherent tradeoff between density and latency in legacy sensors, which restricts or eliminates the ability to do intelligent sensing. For example, while traditional 64 line systems can hit an object once per frame (every 100ms or so), we can, with intelligent sensing, selectively revisit any chosen object twice within 30 microseconds - an improvement of 3000X. This embedded intelligence optimizes data collection, so we can transfer less data while delivering better quality, more relevant content.

AEye’s iDAR system uses proprietary low-cost, solid-state beam-steering 1550nm MOEMS-based LiDAR, computer vision, and embedded artificial intelligence to enable dynamic control of every co-located pixel and voxel in each frame within rapidly changing scenes.


MIT Technology Review adds: "there’s one point that there is no getting around: AEye’s device only has a 70° field of view, which means that a car would need five or six of the sensors dotted around it for a full 360 degrees. And that raises a killer question: how much will it cost? Dussan won’t commit to a number, but he makes it clear that this is supposed to be a high-end option—not competing with hundred-dollar solid state devices, but challenging high-resolution devices like those made by Velodyne. For a full set of sensors around the car, he says, “if you compare true apples-to-apples, we’re going to be the lowest-cost system around.”"

PRNewswire: Ouster emerges from stealth mode and announces the launch of its OS1 LiDAR and $27 million in Series A funding, led by Cox Enterprises. The 64-channel OS1 has begun shipping to customers, and the company is rapidly ramping up commercial-scale production at a price point approximately 85% below that of its competition ($12,000, according to the Ouster site).

The company was founded by CEO Angus Pacala, co-founder and former head of engineering of Quanergy, and CTO Mark Frichtl, a prior Quanergy Systems engineer who also spent time at Palantir Technologies, First Solar, and the Apple Special Projects Group. Pacala and Frichtl formed Ouster with the vision to create a high-performance, reliable, and small form factor LIDAR sensor that could be manufactured at a scale and price that would allow autonomous technology to continue its rapid expansion without the cost and manufacturing constraints currently present in the market.

"The company has maintained a low-profile for over two years – staying heads-down and focusing on getting the OS1 ready to ship," noted CEO and co-founder Angus Pacala. "I'm incredibly proud of our team for their hard work to produce the most advanced, practical, and scalable LIDAR sensor on the market, and we're very excited about the impact our product will have in autonomous vehicles and other applications in robotics," Pacala added.

Ouster's Series A funding will primarily be used for the manufacturing and the continued development of its next sensor designs. The company anticipates manufacturing capacity in the tens of thousands of sensors in 2018 based on its current product design, and Ouster's product roadmap supports continually improving resolution, range, and alternative form factors. Additionally, the new capital will support the company's expansion from approximately 40 employees today to 100 employees by summer 2018.


PRNewswire: Innovusion, too, comes out of stealth mode with a LiDAR featuring:
  • Resolution: provides near picture quality with over 300 lines of resolution and several hundred pixels in both the vertical and horizontal dimensions.
  • Range: detects both light and dark objects at distances up to 150 meters away which allows cars to react and make decisions at freeway speeds and during complex driving situations.
  • Sensor fusion: fuses LiDAR raw data with camera video in the hardware layer which dramatically reduces latency, increases computing efficiency and creates a superior sensor experience.
  • Accessibility: enables a compact design which allows for easy and flexible integration without impairing vehicle aerodynamics. Innovusion's products leverage components available from mature supply chain partners, enabling fast time-to-market, affordable pricing and mass production.

Chinese-language sites Ifeng and Kuwankeji publish more info about the Innovusion product:

"Innovusion also achieved a very important function, is the fusion of laser radar point cloud and camera video data at the hardware level, greatly reducing the sensor data fusion software processing time.

The company founders used to work for Velodyne and Baidu.

For objects with 10% reflectivity, the detection distance is 100 meters or more.

The current product still contains mechanical components, so it can be called a hybrid solid-state LiDAR. Innovusion's team considered design-for-manufacturing at the design stage, and the components used in the product are mature parts. In addition, a 1550 nm laser is used in the prototype.
"