Here Come The Jetsons: Flying Cars And The Internet Of Things

Part 3 of the “Future of Transportation and the Internet of Things” series

If you ever watched the cartoon series The Jetsons – or almost any other show set in the space age – you’ll have noticed that people often get around in personal spacecraft that they themselves pilot. Well, the space age is almost here – at least in the form of flying cars. But we won’t be piloting them. Instead, like autonomous cars on the road, they will be controlled autonomously.

In my last blog, I talked about autonomous vehicles and how much safer they are than human-driven vehicles. To ensure safety in the air, flying cars depend on the same network-connected IoT technology first pioneered in autonomous vehicles on the road.

Is the space age really here?

Let’s first take a quick look at some of the leading organisations out there doing serious work with flying cars.

  • Lilium: A German startup, Lilium tested a full-sized prototype of its flying car in April 2017. The Lilium prototype is entirely electric. It can take off and land vertically like a helicopter – but then transition to forward flight for speeds of up to 300km/h, which is much faster than a helicopter. And it’s quieter than a motorcycle. Lilium has raised $100m in two rounds of funding from Tencent, Ev Williams’ Obvious Ventures, and Niklas Zennstrom’s Atomico, among others.
  • EHang: A Chinese company with deep experience building drones, EHang is perhaps the furthest along. The company produces the EHang 184 – a one-passenger flying car that has already undergone 100 successful manned test flights. Reportedly, the city of Dubai is this year launching a pilot program for an autonomous aerial taxi (AAT) service using the EHang 184.
  • Airbus: The aircraft giant Airbus has developed CityAirbus, an electric vehicle capable of vertical take-off and landing that carries up to four passengers. Airbus Vahana aims in the same direction but is designed for individual travelers. And let’s not forget the hybrid Airbus Pop.Up concept: this modular air-and-ground system involves a passenger capsule that can be attached to a propeller module on top for flying or to a wheeled chassis for driving on the road.

Uber recently signed an agreement to team up with NASA on its Uncrewed Traffic Management (UTM) project, which is developing air traffic control systems for un-crewed aerial systems (flying cars and drones). Even Boeing is making investments in this space.

This is starting to look real.

No network, no flying cars

What all of these ventures have in common is connectedness. Using IoT technology, they’re all controlled remotely – with the vehicle in constant connection to home base along the lines of what is now a reality for autonomous road vehicles like those made by Tesla.

Of course, the networked nature of vehicles (flying or not) has relevance beyond safety. No surprise, then, that Uber is moving forward aggressively with plans to test an on-demand flying-car network by 2020 in LA, Dubai, and Dallas, and by 2023 in Sydney. Here the network provides convenience – coordinating a ride-sharing service in the sky that lets passengers hail flying cars on the fly.

Drones for passengers

Essentially, what we’re moving toward is a future of passenger drones. One obstacle to this reality is keeping batteries charged. Because of battery life limits, for example, the EHang 184 can fly for only 23 minutes. The Lilium vehicle, it is claimed, can fly for up to an hour – enough to make it from London to Paris. Advances in battery storage capacity should iron out most of these range issues.

When we solve this problem – and clear some regulatory hurdles – flying cars will become a lived reality for people in cities everywhere. The benefits will be tremendous, too: less pollution (both air and noise) and less traffic congestion (with flying cars taking another route entirely). And when it comes to emergencies, first responders can be deployed faster and more efficiently than ever before – helping to save lives. And let’s face it, flying cars would just be fun.

Next time I get to Dubai I’ll have to try one out.

To meet the market’s expectations for increasingly fast, responsive, and personalized service, speed of business will be everything. Find out how innovative processes can enable your business to remain successful in this evolving landscape. Learn more and download the IDC paper “Realizing IoT’s Value – Connecting Things to People and Processes.”

Internet of Things – Digitalist Magazine

Here are the reasons behind today’s crazy chip deals

The proposed Broadcom buy of Qualcomm would dwarf the previous year’s chip M&A activity. The data includes announced transactions, not closed deals, and is complete through Q3 2017. Data courtesy of IC Insights.

Look what the internet of things has wrought! Monday, Broadcom, which was bought in 2015 by Avago in a $37 billion acquisition, said it would spend up to $103 billion buying Qualcomm. Let’s not forget that Qualcomm is trying to close a $47 billion acquisition of NXP that should happen some time next year. Meanwhile, Intel and AMD have surprisingly decided to team up to rival Nvidia with a new graphics chip.

These partnerships and potential deals are an excellent example of the challenges that chipmakers face as computing and connectivity move everywhere and become more commoditized. These are challenges caused by the growth of the internet of things.

The Broadcom takeover offer is an example of consolidation across several markets (communications, embedded computing and mobile) as prices for these components drop and markets shift. Meanwhile, the Intel deal signals Intel’s acceptance that general-purpose compute can’t do everything as computing expands to more devices, and that if it wants to succeed it has to embrace other architectures to retain its pricing power.

That’s the big picture, but there are also the mundane facts of day-to-day life as a chip company driving these deals. Making chips is expensive, both in R&D and then in getting the parts designed and manufactured. As consolidation occurs, companies can combine R&D and business lines across many different companies, creating greater economies of scale. In chip-making and design, that scale matters.

Additionally, more and more companies are designing their own chips, whether it’s Apple in its mobile products or Microsoft for its servers. They do this because they have enough scale, and because the tiny tweaks they can make in silicon can differentiate their hardware or services in ways that leave the competition in the dust. Thus, the original chip vendors are left with a market that isn’t exactly shrinking, but one where a customer that succeeds might graduate away from their products.

Let’s hit the Broadcom takeover offer for Qualcomm first. For the last few years, the average selling prices of many of the chips made by Qualcomm, Broadcom, NXP and others have been heading lower and lower. While companies are selling more of them, they are also selling them at lower prices and lower margins. This is good for the internet of things because it means adding intelligence to a device becomes cheaper, but it’s a double-edged sword.

Essentially, as software started eating the world, the value now accrues to software, while the hardware that makes it possible becomes cheaper and almost interchangeable. That puts pressure on the chipmakers. Additionally, they too are getting more and more into building software to make popping their silicon into existing devices easier. A company like Whirlpool doesn’t want to spend its time designing boards or tweaking protocols. It wants to buy a product that “just works.”

That’s good for the customer and helps the market expand because you don’t have to be a firmware expert to design these chips into your products, but it’s expensive for the chipmakers, many of whom have more software engineers on staff than chip designers.

For Qualcomm there’s another challenge at play. Its efforts to swallow NXP (and CSR in 2014) were all about getting more chips that fit into automobiles, RFID networks and smart home devices, because it was seeing its customer base for smartphone processors stagnate. It was attempting to move from the mobile world deeper into the embedded world — which is what NXP did when it acquired Freescale.

As companies like Apple, Samsung, and now, Google, design their own chips for their phones and devices, Qualcomm’s core application processor business is under threat. That’s why we see it seeking new markets such as drones and robotics that also require a bunch of brains at efficient power consumption.

As part of Broadcom, which also makes application processors, Wi-Fi, Bluetooth and other baseband chips, there’s a huge opportunity to combine communications product lines for servers, mobile and embedded devices. Broadcom, as part of Avago, has deep ties in the embedded market through its roots in HP spin-off Agilent, and massive ties in the networking world through LSI, PLX Technology and Emulex.

Looking ahead, we can even see that Broadcom buying Qualcomm would not only cement its dominance in embedded and mobile, but also push it further into servers. Qualcomm is one of several companies trying to use the low-power ARM architecture to build servers that would compete with Intel’s currently dominant x86 architecture. Qualcomm even has a joint venture in China to build such servers.

Before we get to the other big chip news of the day, it’s worth adding that if Broadcom does end up with Qualcomm, the big question is what happens to Qualcomm’s patents and licensing business. Activist investors have urged the company to sell the licensing division, which is currently at the center of Qualcomm’s fight with Apple. Spinning that out could generate cash to cover the purchase, while giving Broadcom the chip businesses it wants. Whoever buys those patents (Apple has a lot of cash) could build its own networking chips for smartphones and connected devices.

Now, back to Intel: Intel and AMD are teaming up to put an AMD graphics core inside an Intel chip for notebook computers. This may not seem like a big deal, but it’s huge. Intel and AMD have been rivals since the creation of AMD. AMD has the only other license to make x86 chips and for decades it has lost money acting as a foil against Intel becoming a monopoly.

Don’t get me wrong. AMD has some awesomely smart engineers who have built technology that leapfrogged what Intel was offering at the time. But execution challenges, and even dirty practices from Intel, always dogged it. AMD did see the importance of graphics processors early on. In 2006 it purchased ATI Technologies, which made graphics cards, and ended up with GPUs that would later help AMD stay competitive with Intel as parallel processing became more and more important in computing.

It even sold the mobile graphics division to Qualcomm, which then used it to build better graphics into its applications processors.

Intel is putting the AMD Radeon graphics tech inside an Intel Core chip designed for the notebook market in a deal that signals Intel’s acceptance of its lack of graphics horsepower. Intel tried to design a graphics chip back in 2008 but eventually gave up after realizing its architecture wasn’t competitive with AMD’s or Nvidia’s GPUs.

Mostly the Intel/AMD partnership is about a new Intel recognizing that the heyday of general-purpose compute is over and that the x86 architecture can’t do everything, especially in a constrained power environment. Under CEO Brian Krzanich, Intel has increasingly embraced the concept of heterogeneous architectures, from custom-made chips for machine learning to the ARM architecture for mobile. So why wouldn’t it work with its former arch-rival?

After all, like every chip company in a world where non-custom silicon is everywhere and worth less and less, Intel has to survive. To do this, it has to make its chips work everywhere they can and ensure that they still sell for a premium.

At a macro level, both deals are a result of more computing in more places putting pressure on pricing and power consumption, as well as a shifting market in which semiconductor companies may see their customers graduate to making their own silicon as they succeed. Stuck in the middle, chip firms have to consolidate to survive.

Stacey on IoT | Internet of Things news and analysis

KRACK Wi-Fi vulnerability disclosed: What it is and what you need to do from here

A serious weakness in the WPA2 Wi-Fi protocol could put almost every wirelessly connected device at risk of attack – with IoT-enabled devices at particular risk.

The vulnerability, known as KRACK (for ‘Key Reinstallation Attacks’), was discovered by Mathy Vanhoef of imec-DistriNet, KU Leuven, and casts doubt on the ‘four-way handshake’, a method of securing Wi-Fi which had previously been mathematically proven secure.

If a Wi-Fi network is protected, the four-way handshake is used to generate a fresh session key. However, as Vanhoef argues, the ‘formal proof does not assure a key is installed once. Instead, it only assures the negotiated key remains secret, and that handshake messages cannot be forged.’ Vanhoef experimented with replaying the third message of the four-way handshake, and found that the client can be tricked into reinstalling an already-in-use key – resetting the transmit nonce and replay counter each time it happens.

In other words, an attacker within range who exploits the flaw can decrypt traffic on a WPA2 network without knowing the password and, depending on the configuration, can also inject or manipulate packets in the network traffic.
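The flaw, in effect, forces nonce reuse: WPA2’s underlying encryption is a stream-style cipher, so reinstalling the key and resetting the nonce means the same keystream encrypts fresh frames. The toy Python sketch below is purely illustrative – the keystream function is a stand-in, not WPA2’s actual AES-CCM cipher – but it shows why reusing a key/nonce pair is catastrophic: XORing two such ciphertexts cancels the keystream entirely.

```python
# Illustrative sketch of why KRACK's nonce reset is catastrophic.
# NOTE: this toy counter-mode keystream is a stand-in, not WPA2's
# real AES-CCM cipher; only the keystream-reuse math matters here.
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Toy counter-mode keystream: hash(key || nonce || counter)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"pairwise-session-key"            # never learned by the attacker
p1 = b"GET /login?user=alice HTTP/1.1"   # secret frame
p2 = b"GET /index.html HTTP/1.1......"   # frame with guessable plaintext

# Normal operation: the nonce increments per frame, keystreams differ.
c1 = xor(p1, keystream(key, nonce=1, length=len(p1)))

# After a key reinstallation the nonce resets, so the SAME keystream
# encrypts the next frame...
c2 = xor(p2, keystream(key, nonce=1, length=len(p2)))

# ...and an eavesdropper can cancel the keystream without the key:
leaked = xor(c1, c2)           # equals p1 XOR p2
assert leaked == xor(p1, p2)
print(xor(leaked, p2))         # b'GET /login?user=alice HTTP/1.1'
```

Known or guessable plaintext in one frame (HTTP headers, protocol boilerplate) thus decrypts the other, which is why Vanhoef could demonstrate traffic decryption without ever recovering the Wi-Fi password.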

Vanhoef added that the attack developed was ‘especially catastrophic’ against version 2.4 and above of wpa_supplicant, a Wi-Fi client commonly used on Linux, as well as version 6.0 and above of Android. “Any device that uses Wi-Fi is likely vulnerable,” he added.

A statement from the Wi-Fi Alliance said it was aware of the vulnerability and the industry was already deploying patches to Wi-Fi users, adding users should expect all of their Wi-Fi devices, patched or unpatched, to ‘continue working well together.’

“There is no evidence that the vulnerability has been exploited maliciously, and Wi-Fi Alliance has taken immediate steps to ensure users can continue to count on Wi-Fi to deliver strong security protections,” the statement read. “Wi-Fi Alliance now requires testing for this vulnerability within our global certification lab network and has provided a vulnerability detection tool for use by any Wi-Fi Alliance member.

“Wi-Fi Alliance is also broadly communicating details on this vulnerability and remedies to device vendors and encouraging them to work with their solution providers to rapidly integrate any necessary patches,” the statement added. “As always, Wi-Fi users should ensure they have installed the latest recommended updates from device manufacturers.”

While leading tech companies are all saying they are working on the problem, there are things users can do in the meantime. The first is not to underestimate the severity of the risk. Brian Knopf, senior director of security research and IoT architect at Neustar, called it a ‘significant exploit’, while Rodney Joffe, senior VP and senior technologist, called it a ‘big deal’.

Aside from taking precautions such as updating client devices and routers and changing Wi-Fi passwords – Vanhoef said the latter would not mitigate an attack but is nevertheless good practice – the next step is using a VPN.

“ISPs can take years to switch to routers with a safer protocol,” said Marty Kamden, CMO of NordVPN. “That’s another situation where users should take their Internet security into their own hands. Everyone should assume that their network is now vulnerable, and take precautions. VPNs remain the strongest defence from these types of vulnerabilities.”

Knopf added that while VPN ‘may help in some cases’, it was not beyond the realms of possibility that exploits for VPN could be chained together with KRACK.

You can find out more about KRACK here and read the full research report here.

Latest from the homepage

The future is here and tech firms own it

Google’s line of new hardware, including stuff that used to have nothing to do with computing.

My obsession with the internet of things is in part an obsession with understanding the future. It’s awesome when a moment comes along that perfectly encapsulates the future you’ve vaguely envisioned. Wednesday’s Google event was such a moment for me.

It wasn’t the phones or computers that clarified the future, it was the new speakers, the earbuds that can translate 40 languages in real time, and the weird snippet-taking camera device. I’ve spent years talking about how connectivity and machine learning (or AI) will generate a business transformation for everyone.

I usually focus on business models and what it means for companies when they have more access to data, but Google showed what it means for product development. In doing so it also clarified what it means to embed technology into everyday products — a phrase that gets tossed off in everyday conversation without much meaning attached. But these three devices show how connectivity and machine learning change everything.

With the $399 stand-alone Home Max speaker, Google has applied its research in machine learning and audio codecs to build a speaker that understands where it is in the room and what is happening around it. It then adjusts.

Essentially, Google has made a context-aware speaker that adapts to its environment. That’s amazing. And other than Sonos, I can’t really see another speaker company coming close to truly changing the game on speakers. Even if Google doesn’t sell many of these speakers, it has clearly applied technology that should push every other speaker company to think differently about how it’s improving on the audio experience.

A similar question might be asked of the Apple AirPods or Google’s Pixel Buds. Both turn a Bluetooth headset into an extension of your computer. Why didn’t JBL invent the Home Max speaker? Why didn’t Bose invent something like Google’s Pixel Buds, with their real-time translation capabilities?

In the case of the headphones, it’s likely because, as owners of the phone, both Apple and Google can easily offload compute-intensive tasks to smartphones while limiting others’ access to the hardware. But in other cases, like the speakers and the Google Clips, it’s a question of culture, and a lack of deep technical expertise outside their core business.

For example, Bose has innovated plenty with noise cancellation technology that also reacts to the volume of the world around the headphone wearer. These Bose hearables also use algorithms to help tune headphones so people can hear better in crowded or noisy environments. So why stop there and not recognize that making context-aware, in-room speakers might also improve sound quality? One possibility is that the Home Max speakers are a gimmick that won’t change the experience much. Another is that Bose simply doesn’t have the data or data scientists needed to make the tech work, because it hasn’t had connectivity in its products for as long, or thought about using them in that way.

And this leads us to the Google Clips, a $249 tiny camera that will record photos and video snippets at the press of a shutter button, or on its own. It’s designed so a user can set it up and forget about it. Google Clips has the “smarts” to recognize the people who matter to the user, as well as the ability to recognize when to take a photo (so says Google). What’s notable is that all of those smarts run locally on the device.

Clips is confusing because it’s expensive and the market seems limited. Yes, parents might think this is neat, but parents generally have connected baby monitors snapping photos of their kid as well as a fast finger on the smartphone camera button. But when viewed as a research project in shrinking computer vision on a device, or a way to get more training data to help computers learn what makes a good picture, Clips makes sense.

There are few companies that can invest in producing a piece of hardware that has a limited market value with a goal of getting the right kind of data to train and test a new computer vision model, or a smaller computer vision module. This is why, as tech invades more and more of our devices, the giants of the technology world are stretching to build products outside of computers.

Does this mean we’ll get that Apple television or a Google washer? Maybe not anytime soon, although Amazon has applied for a patent on a spoilage-sensing fridge. Tech is coming for everyone’s business and it’s not clear if anyone outside of tech has the resources it takes to win.

Stacey on IoT | Internet of Things news and analysis

Apple: The future is here: #iPhone X

Packed with Innovative Features Including a Super Retina Display, TrueDepth Camera System, Face ID and A11 Bionic Chip with Neural Engine


Cupertino, California — Apple today announced iPhone X, the future of the smartphone, in a gorgeous all-glass design with a beautiful 5.8-inch Super Retina display, A11 Bionic chip, wireless charging and an improved rear camera with dual optical image stabilization. iPhone X delivers an innovative and secure new way for customers to unlock, authenticate and pay using Face ID, enabled by the new TrueDepth camera. iPhone X will be available for pre-order beginning Friday, October 27 in more than 55 countries and territories, and in stores beginning Friday, November 3.

“For more than a decade, our intention has been to create an iPhone that is all display. The iPhone X is the realization of that vision,” said Jony Ive, Apple’s chief design officer. “With the introduction of iPhone ten years ago, we revolutionized the mobile phone with Multi-Touch. iPhone X marks a new era for iPhone — one in which the device disappears into the experience.”

“iPhone X is the future of the smartphone. It is packed with incredible new technologies, like the innovative TrueDepth camera system, beautiful Super Retina display and super fast A11 Bionic chip with neural engine,” said Philip Schiller, Apple’s senior vice president of Worldwide Marketing. “iPhone X enables fluid new user experiences — from unlocking your iPhone with Face ID, to playing immersive AR games, to sharing Animoji in Messages — it is the beginning of the next ten years for iPhone.”

Gorgeous All-Screen Design

iPhone X introduces a revolutionary design with a stunning all-screen display that precisely follows the curve of the device, clear to the elegantly rounded corners. The all-glass front and back feature the most durable glass ever in a smartphone in silver or space gray, while a highly polished, surgical-grade stainless steel band seamlessly wraps around and reinforces iPhone X. A seven-layer color process allows for precise color hues and opacity on the glass finish, and a reflective optical layer enhances the rich colors, making the design as elegant as it is durable, while maintaining water and dust resistance.1

Remarkable Super Retina Display

The beautiful 5.8-inch Super Retina display2 is the first OLED panel that rises to the standards of iPhone, with stunning colors, true blacks, a million-to-one contrast ratio and wide color support with the best system-wide color management in a smartphone. The HDR display supports Dolby Vision and HDR10, which together make photo and video content look even more amazing. The addition of True Tone dynamically adjusts the white balance of the display to match the surrounding light for a more natural, paper-like viewing experience. 

Familiar gestures allow customers to naturally and intuitively navigate iPhone X.

iOS 11 is redesigned to take full advantage of the Super Retina display and replaces the Home button with fast and fluid gestures, allowing customers to naturally and intuitively navigate iPhone X. Simply swipe up from the bottom to go home from anywhere.

Face ID, a Powerful and Secure Authentication System

Face ID revolutionizes authentication on iPhone X, using a state-of-the-art TrueDepth camera system made up of a dot projector, infrared camera and flood illuminator, and is powered by A11 Bionic to accurately map and recognize a face. These advanced depth-sensing technologies work together to securely unlock iPhone, enable Apple Pay, gain access to secure apps and many more new features.

Face ID projects more than 30,000 invisible IR dots. The IR image and dot pattern are pushed through neural networks to create a mathematical model of your face and send the data to the secure enclave to confirm a match, while adapting to physical changes in appearance over time. All saved facial information is protected by the secure enclave to keep data extremely secure, while all of the processing is done on-device and not in the cloud to protect user privacy. Face ID only unlocks iPhone X when customers look at it and is designed to prevent spoofing by photos or masks.

Reinvented Front and Back Cameras Featuring Portrait Lighting

The new 7-megapixel TrueDepth camera that enables Face ID features wide color capture, auto image stabilization and precise exposure control, and brings Portrait mode to the front camera for stunning selfies with a depth-of-field effect.

iPhone X also features a redesigned dual 12-megapixel rear camera system with dual optical image stabilization. The ƒ/1.8 aperture on the wide-angle camera joins an improved ƒ/2.4 aperture on the telephoto camera for better photos and videos. A new color filter, deeper pixels and an improved Apple-designed image signal processor deliver advanced pixel processing, wide color capture, faster autofocus in low light and better HDR photos. A new quad LED True Tone Flash offers twice the uniformity of light and includes Slow Sync, resulting in more uniformly lit backgrounds and foregrounds.

The cameras on iPhone X are custom tuned for the ultimate AR experience. Each camera is individually calibrated, with new gyroscopes and accelerometers for accurate motion tracking. The A11 Bionic CPU handles world tracking and scene recognition, the GPU enables incredible graphics at 60fps, and the image signal processor does real-time lighting estimation. With ARKit, iOS developers can take advantage of the TrueDepth camera and the rear cameras to create games and apps offering fantastically immersive and fluid experiences that go far beyond the screen.

The new camera also delivers the highest quality video capture ever in a smartphone, with better video stabilization, 4K video up to 60fps and 1080p slo-mo up to 240fps. The Apple-designed video encoder provides real-time image and motion analysis for optimal quality video.

Portrait mode with Portrait Lighting on both the front and rear cameras brings dramatic studio lighting effects to iPhone and allows customers to capture stunning portraits with a shallow depth-of-field effect in five different lighting styles.3

With iOS 11, iPhone X supports HEIF and HEVC for up to two times compression and storage for twice the photos and videos.

Animoji Brings Emoji to Life

The TrueDepth camera brings emoji to life in a fun new way with Animoji. Working with A11 Bionic, the TrueDepth camera captures and analyzes over 50 different facial muscle movements, then animates those expressions in a dozen different Animoji, including a panda, unicorn and robot. Available as an iMessage app pre-installed on iPhone X, customers can record and send Animoji messages with their voice that can smile, frown and more.

Using the TrueDepth camera, iPhone X brings emoji to life in a fun new way with Animoji.

Introducing A11 Bionic

A11 Bionic, the most powerful and smartest chip ever in a smartphone, features a six-core CPU design with two performance cores that are 25 percent faster and four efficiency cores that are 70 percent faster than the A10 Fusion, offering industry-leading performance and energy efficiency. A new, second-generation performance controller can harness all six cores simultaneously, delivering up to 70 percent greater performance for multi-threaded workloads, giving customers more power while lasting two hours longer than iPhone 7. A11 Bionic also integrates an Apple-designed GPU with a three-core design that delivers up to 30 percent faster graphics performance than the previous generation. All this power enables incredible new machine learning, AR apps and immersive 3D games.

The neural engine in A11 Bionic is purpose-built for machine learning, augmented reality apps and immersive 3D games.

The new A11 Bionic neural engine is a dual-core design and performs up to 600 billion operations per second for real-time processing. A11 Bionic neural engine is designed for specific machine learning algorithms and enables Face ID, Animoji and other features.

Designed for a Wireless Future

The glass back design enables a world-class wireless charging solution. Wireless charging works with the established Qi ecosystem, including two new wireless charging mats from Belkin and mophie, available at Apple Stores.

Apple gave a sneak peek of AirPower, an Apple-designed wireless charging accessory coming in 2018, which offers a generous active charging area that will allow iPhone 8, iPhone 8 Plus or iPhone X customers to simultaneously charge up to three devices, including Apple Watch Series 3 and a new optional wireless charging case for AirPods.

Pricing and Availability
  • iPhone X will be available in silver and space gray in 64GB and 256GB models starting at $999 (US) from Apple Stores, and is also available through Apple Authorized Resellers and carriers (prices may vary).
  • Through Apple’s iPhone Upgrade Program, customers in the US can get iPhone X with the protection of AppleCare+, choose their carrier (no multiyear service contract required) and have the opportunity to upgrade to a new iPhone every year. The iPhone Upgrade Program is available for iPhone X at Apple Stores in the US with monthly payments starting at $49.91.4
  • Customers will be able to order iPhone X beginning Friday, October 27, with availability beginning Friday, November 3, in Andorra, Australia, Austria, Bahrain, Belgium, Bulgaria, Canada, China, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Greenland, Guernsey, Hong Kong, Hungary, Iceland, India, Ireland, Isle of Man, Italy, Japan, Jersey, Kuwait, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Mexico, Monaco, Netherlands, New Zealand, Norway, Poland, Portugal, Puerto Rico, Qatar, Romania, Russia, Saudi Arabia, Singapore, Slovakia, Slovenia, Spain, Sweden, Switzerland, Taiwan, UAE, the UK, the US and the US Virgin Islands.
  • Apple-designed accessories including leather and silicone cases in a range of colors will be available starting at $35 (US), while a new iPhone X Leather Folio will be available for $99 (US). Lightning Docks in color-matching metallic finishes will also be available for $49 (US); prices may vary.
  • Every customer who buys iPhone X from Apple will be offered free Personal Setup in-store or online to help them customize their iPhone by setting up email, showing them new apps from the App Store and more.5
  • Anyone who wants to start with the basics or go further with iPhone X or iOS 11 can sign up for free Today at Apple sessions.


Images of iPhone X

Apple revolutionized personal technology with the introduction of the Macintosh in 1984. Today, Apple leads the world in innovation with iPhone, iPad, Mac, Apple Watch and Apple TV. Apple’s four software platforms — iOS, macOS, watchOS and tvOS — provide seamless experiences across all Apple devices and empower people with breakthrough services including the App Store, Apple Music, Apple Pay and iCloud. Apple’s more than 100,000 employees are dedicated to making the best products on earth, and to leaving the world better than we found it.

Machine Learning Magazine