TechCrunch
Airport Car Rental Service Silvercar Is Headed To LAX In November
Silvercar continues to expand its tech-focused airport car rental service into new markets, and in a few weeks will be making its biggest debut yet. According to an email sent to customers late last week, Silvercar said that on November 4 it will be launching at LAX, which will soon house its largest fleet of Audi A4s.
As we’ve written before, Silvercar hopes to revolutionize the airport car rental business by simplifying the process of getting a car and paying through its mobile app. In doing so, it does away with all the usual issues that people run into when renting a car — the long lines, the constant upsell, having to worry about whether a car has GPS and whatnot.
Instead, Silvercar has one make and model of car available — the Audi A4 — so there’s no choosing between different classes of vehicles or worrying overly much about upgrades or features. All cars can be unlocked through the Silvercar mobile app and have GPS and in-car WiFi for getting around. So all a renter needs to do is show up and take the car out.
The launch at LAX is a big move for Silvercar, which has been gradually expanding since launching at Dallas/Fort Worth late last year. Since then, it’s launched in Austin, Houston, and Dallas Love Field before opening for business at SFO in August.
The Los Angeles airport will house the company’s largest fleet of vehicles, as it seeks to go after one of the largest airport car rental markets in the country. LAX not only does a huge volume of rentals, but it is also home to a number of tech-savvy business people who like to drive in style. So offering up an Audi A4 and a VIP, no-hassle experience to renters could be a huge win for the startup.
Believe it or not, customers have already begun booking rentals from LAX even though the service doesn’t launch for about three weeks, according to a representative for Silvercar. As for why Silvercar is waiting until November 4 to launch — the company is expecting huge demand for Austin City Limits, and will have much of its fleet in town for that, before moving several cars over to LAX.
Check out a screen grab of the email sent to customers below:
Glass Theft Auto: Google Glass Hack Beams Grand Theft Auto's GPS Straight To Your Eye
You know, it’s been a while since I’ve written up some clever Google Glass hack simply because it was awesome. Let’s fix that.
Looking to test the concept of using Glass as a second screen, Android developer Mike DiGiovanni has managed to capture Grand Theft Auto’s oh-so-crucial in-game GPS interface, beaming it to the player’s eyepiece in real time.
Now, if you’ve spent every minute since GTA V’s release blasting around Los Santos, one caveat: Mike had to go back a few generations to make this work. It requires GTA to be running on a computer, which, as many a scorned PC gamer could tell you, means Grand Theft Auto V is out. GTA 4, meanwhile, didn’t want to boot up on any of Mike’s systems. So this is all built around 2001’s Grand Theft Auto 3.
(A render of what the player sees when using Mike’s setup. Capturing and properly portraying things actually running on Google Glass is really, really tough — hence the lack of video).
So, how does it all work?
While Google has promised to give developers a way to communicate from device to device, they haven’t released much on that front yet. So Mike built his own two-part solution. On the PC, an app captures the portion of the screen where GTA’s on-screen GPS unit sits and sends it off to your Glass unit. On Glass, an application (built on the “plain old Android SDK”, as Mike tells me, since Google has yet to release the official, native Glass SDK) listens for the GPS visuals to be fired over across the WiFi network and then pushes them to the display.
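To make the two-part setup concrete, here’s a minimal sketch of the kind of frame-streaming protocol described above. The actual wire format of Mike’s app isn’t public, so the length-prefixed framing here is an assumption; in a real version, the PC side would capture the GPS region of the screen with a screenshot library and push each compressed frame over a TCP socket, while the Glass side would decode and display them.

```python
import struct

def pack_frame(frame_bytes: bytes) -> bytes:
    """PC side: prefix a compressed screen-region frame with its
    4-byte big-endian length so the receiver knows where it ends."""
    return struct.pack(">I", len(frame_bytes)) + frame_bytes

def unpack_frames(stream: bytes) -> list[bytes]:
    """Glass side: split the received byte stream back into the
    individual frames that were packed by pack_frame()."""
    frames, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

if __name__ == "__main__":
    # Stand-in bytes; a real sender would pack JPEG-encoded captures.
    wire = pack_frame(b"frame-1") + pack_frame(b"frame-2")
    print(unpack_frames(wire))
```

Length-prefixed framing is the usual way to send discrete images over a TCP stream, since TCP itself only delivers an undelimited byte stream; at roughly 10 frames per second, even a naive scheme like this is fast enough for the “effectively realtime” behavior Mike describes.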
Is it a bit hacky? Absolutely! But as a proof-of-concept to demonstrate how devices like Glass can be used as a secondary display, it gets the job done, and does a damned good job of conjuring up further concepts. Imagine being able to say “Okay Glass, set waypoint to Ammu-Nation” instead of having to pause the damn game every time. Imagine Metal Gear Solid’s signature video chats floating in front of your eyes as the game carries on. Are developers likely to embrace relatively niche wearables like Glass any time soon? Probably not, but it’s a damned fun thing to think about.
Mike tells me that the whole thing runs at “about 10 frames per second” with minimal delay. “[It’s working] to the point where you could look only at Glass and still drive around,” he says. “You would likely run into [virtual] pedestrians, but you could definitely drive around the streets perfectly.”
Before you start diggin’ around for a link to download the app for yourself, a heads up: while Mike says he “definitely has plans to release it”, it’s a “very fragile proof of concept” at this point. Amongst other things, the heavy use of WiFi paired with the need for the display to be always on means that the app chews through Google Glass’ battery in about an hour. While he hopes to release it in the future, he’ll likely hold off until Google releases the official Glass SDK and he gets a chance to polish it up accordingly. Mike previously made headlines with the release of Winky, a Glass app that lets you snap photos by winking your right eye, and he’s released a number of other apps on his personal site here.
I got a chance to chat with Mike about the project and the technical hurdles involved. Our chat provided some pretty interesting insights into the current state of Glass development (and his plans for this project moving forward), so I’ve pasted the dialog (with Mike’s permission) below:
Can you explain the setup a bit?
Right now there’s no officially supported way for a Glass device to communicate with another device in real time. Older versions of Glass and the My Glass companion app for Android hinted at a way that we could communicate between an app on a phone and Glass, but those have disappeared from recent builds.
This [project] sets up a channel for network communication between an app on Glass and a small piece of software on your PC. Once that’s set up and you start playing GTA, you will see the GPS navigation on the display of Glass. It’s really similar to how you would use the built-in GPS navigation of Glass in a real car.
How fast does it update? Is it in realtime?
It currently gets about 10 frames per second, but is effectively realtime and smooth enough to the point where you could look only at Glass and still drive around. You would likely run into pedestrians, but you could definitely drive around the streets perfectly. It’s definitely technologically possible to improve that to a point where it’s as smooth and fast as watching any video.
What sort of tech did you use here? Which SDK?
I used the plain old Android SDK. The real GDK for Glass development is rumored to be coming out this month, but right now you can dive in with the standard Android SDK. There’s a handful of things to watch out for, but after releasing quite a few native Android SDK apps for Glass, I know what to keep in mind. I actually just gave a talk to developers in Toronto at Screens 2013 about how to develop for Glass without owning the Glass hardware, a large portion of which covered the potential pitfalls of using the standard Android SDK.
How are you grabbing the data and pushing it to Glass?
The tech side of it is really basic. We have a small piece of software running on the PC that’s playing GTA. This software is pretty much pulling a subsection of the screen and streaming it to Glass over your network.
There are a few downsides to this approach: it doesn’t look amazing, and it’s prone to network congestion.
If given the chance to turn this into a fully featured product, we would likely build up the software that runs on the PC and do image analysis to turn the map data into non-bitmap data. This would likely result in a much crisper image on Glass and give us the ability to customize the map completely. This is pretty much the perfect solution for “hacking” support onto any sort of existing game.
If we had the opportunity to turn this into an officially supported function of a game or other product, it becomes much easier to send perfect data or images to Glass directly from the game.
All in all, the development side of things for this project is pretty mundane. The most exciting part about this has been opening up the conversation about using Glass or even other wearables as a second screen. It’s a use case that hasn’t been talked about much. It could be something as simple as moving your HUD or map to a wearable device, as is demonstrated with this software. Or it could be similar to how the Wii U tries to bring forward asymmetric multiplayer gaming.
How long does the Glass battery last, doing something like this?
Not very long at all, less than an hour.
One of the biggest battery killers on Glass is keeping the screen on. Right now, this software keeps the screen on all the time. There are potential ways to address this, like letting the screen go off after a certain period of time, but that creates a usability problem with the current SDK. Whenever the Glass screen goes off, you get kicked out of any app that you were running. This means you would have to re-launch this piece of software on Glass every time the screen turned off, which is a terrible experience.
There are a few “hacks” that you could do to get around that, but it’s not likely something that would continue to work in the long run, and I do believe that when the GDK is released, we will have a real solution for this. With that in mind, it’s something I decided not to address at this point.
How long did it take to build?
It only took a few short hours to put it together. I worked on it during my train commute to work over the course of a few days. Of course, that’s just a very fragile proof of concept that I can reliably run on my computer, with my phone, and my Glass set up through my phone acting as a router.
At this point, for anyone else it would probably fall apart. I have a list of things to clean up to get it working in a more general, less controlled environment so others could experience it. I definitely have plans to release it similarly to my other apps. I may hold off on it until we get the real GDK, in the hopes that the user experience and battery issues can be addressed.
Fly Or Die: iPhone 5s
What’s there to say? It’s the flagship iPhone 5s.
To say that it won’t be wildly successful would be silly. We already know that Apple sold 9 million units of the iPhone 5s and the more colorful iPhone 5c combined in the very first weekend of availability. That’s more than any previous generation.
So instead of asking ourselves whether this fingerprint-reading, awesome-picture-taking, gold-clad phone is a viable product or not, we should ask ourselves if it’s worth upgrading from the iPhone 5 or the iPhone 4S before it.
The three major upgrades on the phone are the TouchID sensor, which lets you unlock your phone or approve purchases with a quick scan of your finger; a major camera update; and a processor bump.
Where the camera is concerned, I’ve played around with the TrueTone flash a lot more after shooting this review, and I’m not as impressed as I’d like to be, though I still think it’s a fine improvement over the original, white-washing flash. I’m far more excited about the camera’s ability to zoom and remain crisper than before, and the slow-motion video functionality is also quite impressive.
In terms of processing speed, daily activities don’t yield a noticeable improvement, as you can see in this video. But I feel as though the M7 motion coprocessor makes a big difference with the little things, like being constantly asked to join wifi networks.
Last, but certainly not least, the TouchID feature is the most surprising to me. After a couple weeks of using TouchID, something I didn’t expect to care about at all, it’s the one feature I’ve grown most attached to. It only shaves a second or two off of unlocking time, but it’s easy to be spoiled by it.
Not only that, but TouchID is clearly a building block toward a new way of computing. Combine a Siri Google search with a quick TouchID unlock and you have answers right before your eyes, with nary a virtual key pressed.
Two flies.
Student Tablet Hardware Melts, District Suspends $30 Million Amplify Program On Safety Concerns
A North Carolina school district has suspended the use of 15,000 tablets after reports of multiple hardware issues, including the device’s charger melting at home. Guilford County Superintendent Maurice “Mo” Green has suspended the $30 million program on safety concerns.
The recall is a major sting for NewsCorp’s Amplify, which released details of its digital-first education initiative back at TechCrunch Disrupt 2012. Directed by former New York City education chancellor Joel Klein, Amplify carries high hopes of helping bring K-12 education into the 21st century. But melting tablet accessories aren’t a good sign.
“We recognize that suspending the program on short notice is going to be disruptive to students, staff and parents,” Green explained. “My decision was made out of an abundance of caution, and I decided to err on the side of safety.”
Apparently, that’s not the only problem. As reported by News & Record,
“Parent Linda Mozell said her daughter and other students at Southeast Middle School had repeated problems connecting to the Internet with their tablets. And even though her daughter got one of the “hard shell” protective cases, that caused its own set of problems, she said. The keyboard’s hard-shell case kept rubbing against the tablet screen in a way that could scar it, she said. In addition, the cord connecting the tablet and keyboard broke easily, the stylus was too big for easy use, and the equipment came home without a user’s manual.”
Amplify has given us a response (pasted in full below) and tells us that the breakage rate of screens is around 3%, slightly above the roughly 2.5% industry average cited for Asus devices. An Amplify spokesperson says the melting charger is (so far) an isolated incident.
Amplify and Guilford County aren’t the only ones experiencing hiccups with tablets. Los Angeles Unified suspended its 1-to-1 iPad program after students hacked through the filters, granting them full-fledged access to the bountiful wonders of the Internet.
Presumably the next round of Amplify’s tablets will not pose a safety risk to children. Amplify’s response is below:
“This week our largest customer, Guilford County Schools, informed us that a tablet charger, which was manufactured by ASUS, was partially melted while charging a student’s tablet at home overnight.
We are working to determine whether the issue was caused by an electrical problem in the student’s home or because of a manufacturing defect.
While the problem occurred with only one of the more than 500,000 chargers of this kind that ASUS has manufactured and distributed across the world, one instance is too many in our opinion. Nothing comes before the safety of our students, teachers and their families.
Out of an abundance of caution, we are requesting that Amplify Tablet customers cease all further use of the ASUS charger until we can determine the cause of the single reported malfunction in Guilford County, North Carolina.”