LightBlog

mardi 31 janvier 2017

TensorFlow RC 1.0 Released, Android Optimizations Among New Features

Feature Image Displays Picture A in the Style of Famous Paintings B, C, and D – Image Credit: Google Research Blogs

TensorFlow – an open-source machine learning framework from the Google Brain team – has made the release candidate for version 1.0 of its increasingly popular platform available. Some of the most exciting new features include pre-made neural networks for Android cameras (person/object detection as well as artistic style transfer), a Java API, and Accelerated Linear Algebra (XLA) integration – a compiler that aims to lessen resource load and optimize applications for mobile use.

Tübingen Neckarfront, Germany by Andreas Praefcke in the style of “Head of a Clown” by Georges Rouault – Image Credit: Google Research Blogs


Improvements in Python and Java

In this version, the Python API has been reworked to adopt more of Python’s own syntax and idioms. Unfortunately, this means that TensorFlow applications written against earlier versions will need to be updated to keep working in 1.0. Although a conversion script has been released, some scripts may still need to be modified manually. Installation via Python is supported on macOS, Linux, and Windows – available as a pip, Anaconda, or Docker install, among others.
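For a sense of what the conversion handles, many of the 1.0 changes are mechanical renames (tf.mul became tf.multiply, tf.sub became tf.subtract, and so on). A toy sketch of that kind of rewrite (hand-rolled for illustration, not the actual upgrade tool, and only a small sample of the renames):

```python
# Toy illustration of the mechanical renames TensorFlow's 1.0 conversion
# script performs. This is a hand-rolled sketch, NOT the real upgrade tool,
# and the rename table covers only a few of the documented changes.
RENAMES = {
    "tf.mul": "tf.multiply",
    "tf.sub": "tf.subtract",
    "tf.neg": "tf.negative",
}

def upgrade_line(line: str) -> str:
    """Apply the sample 0.x -> 1.0 renames to one line of source (naive
    substring replacement; the real tool parses the code properly)."""
    for old, new in RENAMES.items():
        line = line.replace(old, new)
    return line

print(upgrade_line("y = tf.mul(a, tf.neg(b))"))
# -> y = tf.multiply(a, tf.negative(b))
```

The real conversion script works on whole files and handles argument reorderings too, which is why some code still needs manual fixes afterward.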

An experimental Java API has also emerged, but for now requires a Linux or MacOS environment to be built from source code.

XLA in Mobile and Beyond

The creators of TensorFlow have long been committed to a universal API, and the implementation of XLA can certainly help. XLA, in essence, compiles the network’s layer-to-layer computations into code optimized for the target CPU or GPU, resulting in reduced resource load, increased speed, less code, and an overall smaller, more lightweight application. This optimization will also facilitate the porting of server-run networks to mobile hardware. Providing the structure, and in some applications the data, to create or use a neural network, TensorFlow’s single API can be used for implementation across desktops, servers, or mobile devices – all that’s required is a unique backend. While offering great potential, XLA is still experimental, and the team behind it has asked explicitly for developer input so that they can quickly bring better machine learning performance to mobile platforms – they’re looking at you, Snapdragon 835 fans.
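The fusion idea at the heart of XLA can be illustrated in a few lines: instead of running each op separately and materializing intermediate buffers, a fused kernel computes the whole chain in one pass. A toy pure-Python sketch of the concept (illustrative only – this is not XLA’s actual API):

```python
# Toy sketch of operator fusion, the core idea behind XLA's optimizations.
# Unfused: each op materializes a full intermediate list.
# Fused: one pass, no intermediates. (Illustrative only -- not XLA's API.)

def unfused(x, w, b):
    scaled = [xi * w for xi in x]             # intermediate buffer 1
    shifted = [si + b for si in scaled]       # intermediate buffer 2
    return [max(0.0, hi) for hi in shifted]   # relu, final buffer

def fused(x, w, b):
    # One loop, one output buffer: the shape of what a fusing compiler emits.
    return [max(0.0, xi * w + b) for xi in x]

x = [1.0, -2.0, 3.0]
assert unfused(x, 2.0, 1.0) == fused(x, 2.0, 1.0)
print(fused(x, 2.0, 1.0))  # -> [3.0, 0.0, 7.0]
```

On a phone, skipping those intermediate buffers is exactly where the memory and battery savings come from.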

Qualcomm, IBM, Raspberry Pi, Snapchat, and of course Google are a few names on the growing list of companies to either add support for or work closely with Tensorflow and its dedicated team. With this release, the team edges ever-closer to delivering freely implemented neural networks to application developers and consumers alike.


Interested in developing? Check out these links:

Source 1: TensorFlow Source 2: InfoWorld



from xda-developers http://ift.tt/2kNI0sh
via IFTTT

Project Fi is Rumored to Integrate with Google Voice Soon

To kick off last week with a bang, Google finally started rolling out a big update to an application they’ve been ignoring for the longest time – Google Voice. This seems to be the latest step in Google’s overarching goal of moving regular users away from their Hangouts application. With Hangouts shifting towards becoming an enterprise-first service, the millions of people who use Hangouts will need an alternative. Most can move over to applications like Allo and Duo, but this leaves many Project Fi users in the dark.

As of right now, Google doesn’t pre-install Hangouts on the Pixel or the Pixel XL unless you buy either through Project Fi. Many Project Fi customers are currently using Hangouts for certain day to day features, such as synchronized messaging on multiple platforms. When Google completes the shift of Hangouts to the enterprise market, they will need a solution for current Project Fi users. A new rumor from 9to5Google claims that Project Fi may be integrated into Google Voice in the near future.

While Google has yet to confirm this integration, they do seem to hint at some sort of solution. Google knows that a lot of Project Fi customers rely on Hangouts, and has told 9to5Google that those users should “continue to use Hangouts” while the company is actively “working on a solution.” Google doesn’t mention Google Voice as that solution, but sources close to 9to5Google say that Project Fi integration will be a “keystone” feature of the new Google Voice update.

9to5Google trusts this source, as they have been correct with rumors in the past. The same person previously told 9to5Google that VoIP integration would be coming to Google Voice (which Google then confirmed). Still, as with most rumors, we should take this one with a grain of salt, but we could see things unfold this way since the information does come from what is said to be a reliable source.


Source: 9to5Google



from xda-developers http://ift.tt/2knmGg0
via IFTTT

ZTE Admits Kickstarter isn’t the Place to Sell Project CSX/Hawkeye

Late last year, ZTE decided it wanted to sell a smartphone built on ideas crowdsourced from the community. The initiative was given the name Project CSX, and with it ZTE wanted the community to decide everything that went into the phone.

The main rules when launching this project were that it needed to be a mobile product, it needed to be affordable, and it needed to be technically possible for a 2017 launch. Naturally, this turned into a project for a smartphone, and the community was given 5 different features to vote on.

As fans of stock Android, many of us here at XDA wanted that option to win out when we wrote about it in October of last year. The other choices were eye-tracking and self-adhesion, a glove accessory powered by Android, intelligent cases for phones (such as a gamepad, stylus, or e-ink flip cover), and lastly a VR-interactive diving mask. Many of these were way out of left field and would obviously take a company years to perfect before bringing them to market.

So the idea of a phone with eye-tracking and self-adhesion eventually won out (don’t ask us how), and this is how we ended up with the phone that would soon be named Hawkeye. It was less than two weeks ago that they launched their Kickstarter campaign for the ZTE Hawkeye, and we learned what type of hardware they chose for the device. Many were upset that, after such a long buildup, they ended up choosing the Snapdragon 625 SoC for the device.

In a new interview with Android Central, Jeff Yee, ZTE’s Vice President of Technology Partnerships and Planning for ZTE North America, admits they should have approached this idea differently. Yee believes they should have been more granular with the idea: first asking the community whether it wanted the Snapdragon 835 or the 625, then whether it wanted the fingerprint scanner on the front or the back. The Kickstarter campaign has raised less than $40,000 (out of the $500,000 they asked for), and Yee tells us they are considering cancelling the project altogether.

Mr. Yee says that if they do end up cancelling it, we could see the eye-tracking and self-adhesion features appear in a future ZTE flagship (please, please don’t ruin the Axon line with this).

Source: Android Central



from xda-developers http://ift.tt/2kLPMTr
via IFTTT

Updated LineageOS 13.0 for the NVIDIA SHIELD Tablet

If you have an NVIDIA SHIELD Tablet, check out this LineageOS build based on Android 6.0 with the latest LineageOS commits! Head on over to the forum post for the download link!



from xda-developers http://ift.tt/2kNJmX9
via IFTTT

Google Announces Android Nougat 7.1.2, Public Beta Starts Rolling Out For Pixel and Nexus Devices

In an official blog post, Google has announced the beta version of the upcoming Android maintenance release: Android Nougat 7.1.2. The public beta update will start rolling out today for supported devices. As always, the update will only be rolled out to devices enrolled in the Android Beta Program.

The supported devices include the Google Pixel, Pixel XL, Nexus 5X, Nexus Player, and the Pixel C. Unfortunately, the Nexus 6 and Nexus 9 won’t be receiving Android Nougat 7.1.2, as confirmed by Google earlier. This shouldn’t be a surprise at all, since both devices passed the 2-year software support period a while back. However, both will continue to receive monthly security patches for one more year.

Google says the update is focused on refinements, and it includes a lot of bugfixes as well as many under-the-hood optimizations. If you have previously enrolled your device in the program, you don’t need to do anything at all; the update will automatically roll out to your device in the next few days. If not, enroll your eligible device in the Android Beta Program here. Alternatively, you can also update your device manually by grabbing the factory images for your device from here.

The update changelog has not yet been disclosed by Google, though a release note posted on the Android Developers site for the Android 7.1.2 update outlines some of the known bugs in the beta. These include a Quick Settings issue on the Pixel C, occasional UI hangs, WiFi stability issues, the screen turning black during the transition from the boot animation to the setup wizard, and more.

As for when we will see the final release, Google says it expects to release the final build of Android 7.1.2 in the “next couple of months.”

Source: Android Developers Blog



from xda-developers http://ift.tt/2jqPpAF
via IFTTT

Nextbit has Officially been Acquired by Razer

You may know of Razer as the company that sells high-end PCs and PC accessories, but it got into the Android business in 2015 and even relaunched the OUYA store as Cortex for the Razer Forge TV. They haven’t been very active in the community since then, but it seems they aren’t done with the Android ecosystem either. Yesterday, it was announced that Razer had acquired both the assets and the entire 30-person team behind the Nextbit Robin.

So, many are asking what this means for Nextbit’s flagship smartphone. Nextbit has confirmed they are no longer selling the Robin and its accessories through their official channels (although you can still buy it from Amazon as I write this up). Any remaining units being sold right now are part of the company’s last batch of devices and there will not be any additional ones manufactured. Nextbit has also announced what this means for customers who currently own the Robin.

Nextbit CEO Tom Moss says they will continue to offer hardware support for the Nextbit Robin for 6 more months (for warranties and such). On top of that, current customers can expect to receive software updates (both new Android OS updates and security patches) for the next 12 months. After that though, Nextbit will no longer be working on the Robin. Instead, they are said to be working as an “independent division inside Razer,” and will be focused on “unique mobile design and experiences.”

While this is definitely bad news for those who wanted to see Nextbit succeed, it’s certainly a better outcome than other technology companies faced when they had to sell their assets. We’ll have to wait and see if the team continues to work on mobile hardware, or if they will be focused on integrating their cloud technology into current and future Razer products.

Source: Nextbit



from xda-developers http://ift.tt/2jRbqoJ
via IFTTT

lundi 30 janvier 2017

App Fixes the Quick Settings Flashlight Tile for the Redmi Note 3 Pro (Kenzo)

Some people with the Redmi Note 3 Pro have been having trouble with the Flashlight tile in their Quick Settings panel. XDA Recognized Themer Umang96 has released a root app that fixes the tile, built with the help of XDA Junior Member shayanism and XDA Member hichaam.



from xda-developers http://ift.tt/2jMGpEu
via IFTTT

XDA-Developers Invites Your Ideas for the Google Summer of Code Program!

XDA-Developers was founded on the need to work with closed source software, often in an effort to fix what the manufacturer broke or intentionally disabled on their smart devices. Since then, we have evolved to place more and more emphasis on open source projects. Open sourced software is much easier for developers to work with and is a great starting place for beginning developers to learn how to code.

With that in mind, we are proud to announce that XDA-Developers is applying to become a Mentor Organization for Google Summer of Code (GSoC).


What is Google Summer of Code?

Google Summer of Code is a global program focused on introducing student developers to open source software. Students apply to and work on a 3-month open source project with a mentor organization during their summer break (the timeline follows the U.S. university schedule). Student participants (who must be at least 18 years old and have completed secondary school) are paired with a mentor from a participating organization, which allows them to gain exposure to real-world software development.

What’s more, the incentive for students participating in this program is not just hands-on experience working on a project – you also get paid to contribute to open source projects!

You can learn more about Google Summer of Code over on its official page here. Additionally, you can view previous GSoC projects over here to get an idea of what kinds of projects other students have worked on in the past. Finally, you can also refer to the 2016 GSoC Archive and FLOSS Manuals.

XDA-Developers as a Mentor Organization

For the 2017 Google Summer of Code, XDA-Developers is applying to be a Mentor Organization for the very first time. We believe that open source code is the future of mobile software development. Our participation in the Google Summer of Code program is a small way of promoting the advantages of open source projects and implanting a love of open source development in the next generation of developers.

The first step in our application process is to invite project ideas from you as a community. These are ideas for projects that can be completed in about 12 weeks of coding. Anyone can submit a feasible idea, but we would really recommend taking a look at previous projects to get a good grasp of what GSoC typically expects.

Here is the requirement set for the ideas:

  1. A project title/description
  2. More detailed description of the project (2-5 sentences)
  3. Expected outcomes
  4. Skills required/preferred
  5. Possible mentors at XDA (can be the idea submitter but not a student)
  6. If possible, an easy, medium or hard rating for the project
  7. Your name and/or XDA username

All ideas are to be submitted over at our GitHub page.

The next step in our process is to build a team of Mentors. If you are able and willing, you can also apply to become a Mentor for your idea. The position requires a commitment to guiding a budding student and building one-on-one rapport, so any developers interested in fostering a love of open source projects may apply for this position.


So, does the idea of being a mentor or working with one strike your fancy? Make sure you apply in that case. Also, let us know your thoughts in the comments below!



from xda-developers http://ift.tt/2kkZcI2
via IFTTT

Mod Enables the Samsung Gear Application for Non-Samsung Devices

XDA Recognized Developer j to the 4n noticed that the Samsung Gear application didn’t work on the HTC 10, as well as some other devices. So they whipped up a modded APK and have been told by at least four others that it helped fix the issue.



from xda-developers http://ift.tt/2jOHKbw
via IFTTT

App for OnePlus 3 Gives you Better Control of Pocket Mode

XDA Senior Member rituj26 was tired of some OnePlus 3 ROMs only disabling gestures while others only disabled the fingerprint sensor. So they created this little root application that gives you the ability to toggle specific features when Pocket Mode is enabled.



from xda-developers http://ift.tt/2jM267R
via IFTTT

Opinion: The OnePlus 3 and Other 2016 Devices Stand to Benefit from a Held Up & Held Back Snapdragon 835

The recent wave of reports regarding the fate of Snapdragon 835 devices seems to point to a slight delay in the arrival of Qualcomm’s latest and greatest. Furthermore, we now know a few big devices coming our way at MWC are launching with last year’s Snapdragon 821.

There is nothing wrong with the Snapdragon 821 — in fact, it was a big offering from Qualcomm, a great step up from the flawed 810, and ultimately a good option for all kinds of OEMs, many of which accomplished great things in the realms of camera quality, performance and even battery life with this chipset. We’d argue that more options for OEMs to choose from is always a good thing, though, and we are certainly concerned about Qualcomm once again holding a generation back should the Snapdragon 835 arrive too late, on fewer devices, or perform worse than expected. These aren’t unfounded concerns, and early figures for the processor’s performance improvements don’t suggest a year-on-year jump as prominent as what we are accustomed to, nor the kind of generational leap we’d love to have. While 20 to 25 percent faster graphics and CPU performance is nothing to scoff at, the situation becomes a lot tougher for Qualcomm when you factor in the lead that A72-based processors and Apple chipsets already had in the CPU department, as well as the fact that A73-based chipsets and newer Mali GPUs are already being adopted by silicon makers like HiSilicon.

Suggested Reading: A Widening Gap: The A10 Fusion Puts a Chokehold on Qualcomm’s Prospects

Furthermore, these performance improvements are notably lower than what we saw in previous years. With Adreno GPUs, for example (and going by official percentages from Qualcomm), the Snapdragon 805’s Adreno 420 was reported to be 40% faster than the preceding GPUs in the Snapdragon 800 and 801. The Adreno 430 in the Snapdragon 810 further boosted speed by 30%, making for a strongpoint of the 810 in spite of its thermal constraints. Finally, the Adreno 530 offers up to 40% better graphics performance over the 810’s GPU. While these proportional increases don’t always translate directly into benchmark results, Qualcomm remained at the top of the mobile graphics game through its steadfast Adreno portfolio. This year, Qualcomm’s GPU jumps just 25%, the smallest such figure the company has shared over the years (though I’d argue many of the mitigating factors mentioned below do make up for it). The advances in CPU performance follow a similar pattern, with the latest CPU increase settling for around 25% as well, despite the move to semi-custom cores (it’s unclear whether the base is A72 or A73) and the 30% reduction in area enabled by the jump to 10nm, with area efficiency greatly contributing to performance and power savings of 40%.
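Taking Qualcomm’s own quoted percentages at face value, the generational claims compound like this (a back-of-the-envelope sketch; the 835’s GPU is the Adreno 540, a name not quoted above, and these are marketing figures rather than benchmark results):

```python
# Back-of-the-envelope compounding of Qualcomm's own year-on-year Adreno
# claims quoted above. Marketing figures, not measured benchmark deltas.
gains = {
    "Adreno 420 (SD805)": 0.40,  # vs. the Adreno 330 in SD800/801
    "Adreno 430 (SD810)": 0.30,
    "Adreno 530 (SD820)": 0.40,
    "Adreno 540 (SD835)": 0.25,
}

factor = 1.0
for gpu, gain in gains.items():
    factor *= 1.0 + gain
    print(f"{gpu}: {factor:.2f}x the Adreno 330 baseline")
# The 25% jump this year is clearly the smallest multiplier in the chain.
```

Seen this way, the 835’s claimed GPU gain contributes the least of any generation to the cumulative figure since the Snapdragon 800 era.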

The fact that Qualcomm’s Snapdragon 835 might come a bit later than usual, and that it might not be as big a leap as previous iterations have been, brings a bittersweet conclusion for current smartphone owners: their devices are slightly more future-proof, as the race for faster processors takes a short rest and picks up at a slower pace. Of course, other companies using non-Qualcomm chipsets will reap the benefit – and either catch up or get further ahead – but most OEMs are currently limited to Qualcomm chipsets for their flagship devices. In other words, their devices will have a couple of extra months before an iteration with a better processor arrives, and that increment won’t be as drastic as in previous years. The HTC U Ultra is selling with a Snapdragon 821 inside it, and there’s reason to believe the LG G6 will as well — we know that devices coming at MWC are not arriving with a Snapdragon 835, or at least not going on sale with one until April (it’s rumored that Sony’s devices will indeed pack a Snapdragon 835, while being announced at MWC). I’ve confirmed with my sources that actual production will not start until after March for many of these Snapdragon 835 flagships, with Samsung first in line. This has the awkward consequence of making companies like HTC and LG essentially launch their early 2016 and early 2017 flagships with practically the same chipset — in LG’s case, its last three flagships across its two biggest lines will have near-identical computational ability. If you are an LG V20 owner and Android enthusiast, however, you have less of a reason to upgrade and thus little reason to fret! (While I wouldn’t normally expect a V line owner to specifically go after a G line device, given the traditional differences between the two, with a larger screen on the G6 both lineups could be converging, much like the Galaxy Edge and Note devices ended up satisfying a very similar set of users.)

While the Snapdragon 821 fell behind in terms of raw CPU prowess, it kept a healthy lead in GPU performance through the sheer strength of the Adreno 530, a department Qualcomm has yet to fully surrender to other chipset makers in the Android space, even with ARM’s Bifrost architecture in the excellent Mali G71. If we compare the transition from 2015 to 2016, we find that many users actually had a reason to actively go out of their way and upgrade, given the thermal constraints and efficiency limitations of the Snapdragon 810, which ultimately impacted every device it resided in (some less than others, such as the still-excellent Nexus 6P) with worse performance (particularly frustrating sustained performance), uncomfortable heat and, in some cases, disappointing battery life. There is much less of a reason to upgrade to a device running a 2017 Qualcomm chipset than there was for 2015 flagship owners in 2016, that’s for sure. So if you bought a new phone in early or mid 2016 in particular, you do get a sort of additional time window on your bleeding-edge status, especially if your choice was a Q1 or Q2 HTC or LG device. The phone I believe benefits the most from this, though, is the OnePlus 3 (and to a similar extent, the OnePlus 3T).


2017 Flagship Killer? No, but closer

OnePlus has made noise with its “Never Settle” slogan since its inception, and one could argue that the OnePlus One was fully deserving of such marketing — it did pack tremendous specs for its time, at a much cheaper price than the competition. Back then, affordable flagships were just starting to emerge and to gain notoriety in the West. OnePlus managed to ride that wave and deliver a solid, affordable and powerful package that many developers and XDA users still love to this day. It’s surprising and telling how many OnePlus One users still roam our forums, how development lasted through multiple releases, and how well the phone holds up today. The OnePlus 2 was a different story, however — it was one of the worst exponents of the Snapdragon 810, with inconsistent performance, throttling and artificial workarounds that remind us of current practices. It laughably proved its own marketing slogan wrong, as the “2016 flagship killer” struggled to offer a better experience than 2014 phones.

The OnePlus 3 fixed that: not only did it offer a processing package similar to other 2016 phones, it arguably beat most of them by not skimping on any component and intelligently using software for an extra advantage. The OnePlus 3 came out with the Snapdragon 820 and 6GB of DDR4 RAM, whereas every other flagship from well-known manufacturers still opted for 4GB of RAM. Sure, at the time of release there was no point in having that much RAM, but software updates did give OnePlus 3 owners better RAM management, and you can still get the most out of it down the road by modifying the software. It’s a small thing, but certainly a specification that OnePlus can claim it had over 2016 flagships, and still has even over early 2017 flagships (at least the HTC U Ultra, which opted for 4GB of RAM).

Moreover, the phone still uses a combination of UFS 2.0 storage and F2FS on newer builds of OxygenOS, increasing read and write speeds and impacting real-world performance in the form of faster app and game opening. This is worth pointing out because not all 2016 flagships have this kind of powerful storage, and few of those that do are set up with F2FS. We’ve detailed just how big of a difference this is, and how it ties in with other decisions OnePlus made to deliver an extremely speedy phone in the OnePlus 3 and OnePlus 3T.
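The read/write speed claims behind storage comparisons like this are typically produced by timed sequential-transfer tests; a minimal sketch of the method (it measures whatever disk it runs on, so it illustrates the technique rather than UFS 2.0 or F2FS specifically):

```python
import os
import tempfile
import time

# Minimal sequential-write throughput probe of the kind used to compare
# storage stacks. Measures the local disk; purely illustrative of the method.
def write_throughput(total_mb=16, chunk_mb=4):
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
        elapsed = time.perf_counter() - start
    size = os.path.getsize(path)
    os.remove(path)
    return size, total_mb / elapsed  # bytes written, MB/s

size, mbps = write_throughput()
print(f"wrote {size} bytes at {mbps:.1f} MB/s")
```

Real storage reviews add random-I/O and read passes on top of this, but the sequential write above is the headline number most spec sheets quote.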

With the Snapdragon 835 being in the situation it’s in, the OnePlus 3 (and 3T) look even more attractive on paper, and ironically enough, even more worthy of the slogans OnePlus has used to market its previous phones. While we can all agree that the OnePlus 2’s use of “2016 flagship killer” as an advertising catchphrase was ridiculous and completely unfounded, the OnePlus 3’s processing package is made even more future-proof by the circumstances of the mobile silicon market. It topped benchmark charts at the time of release and it demonstrably outperforms most other phones in real-world scenarios — and this is running OxygenOS as OnePlus intended, before any of the options that XDA users are accustomed to through mods, custom kernels and ROMs, governor tweaks, and much more.

In this sense, the OnePlus 3 is extremely future-proof, and the non-T variant in particular stands out as a device that sold for not only half the price of 2016 flagships, but also of many 2017 flagships while still offering the same performance, or a delta that’s smaller than years prior (once 835 flagships roll out). All 2016 phones stand to benefit from the current situation regarding smartphone processors, but the one that has the most going for it in terms of the best processing package for the longest time and for the least amount of money is, in my opinion, the original OnePlus 3.


Final Editor’s Note: I personally believe that the Snapdragon 835 is a healthy upgrade over the Snapdragon 820 and 821, and that the quality of the chipset cannot and should not be measured merely by the performance improvements announced by the chipset maker or revealed by benchmarks. Qualcomm’s chipsets in particular offer a ton of features that don’t make it into charts and spreadsheets, from the Qualcomm-enabled TouchBoost and its app-opening speed tweaks to the many peripherals and useful functionality that come with the Hexagon DSP, their Aqstic codec, Quick Charge, and now support for TensorFlow for on-chip machine learning, VR optimizations, Q-Sync and more. The 835 is also designed with power efficiency in mind, focusing on using the low-power cores for up to 80 percent of normal smartphone workloads. When it comes to raw performance and benchmark scores, though, I don’t expect the 835 to blow anyone away, and that wouldn’t be surprising given how little Qualcomm focused on performance in both the pre-briefing session and the launch event. We’ll take an in-depth look at the Snapdragon 835 when we can get our hands on actual devices, putting them through our performance analysis, and we’ll go beyond benchmarks to analyze and quantify its additional benefits as well.



from xda-developers http://ift.tt/2kki1ev
via IFTTT

The Chromecast Ethernet Adapter Works with Google Home

A couple of weeks after the Google Home personal assistant was announced at Google I/O 2016 last year, it was reported that Google Home would be nothing more than a Chromecast stuffed inside a speaker. The report came from The Information, which based the claim on the two devices sharing the same microprocessor and WiFi chip. There really isn’t that much to a Chromecast, so all Google would need to do is add a speaker, microphone, LED lights and a plastic casing and boom, you have Google Home.

Then in November of last year, iFixit released their teardown of the Google Home, which confirmed that the two devices share similar hardware. We learned that Google Home uses the same CPU, flash, and RAM as 2015’s Chromecast. The Chromecast itself has become incredibly popular, with Google selling tens of millions of units since it was first released. Google even sells an Ethernet Adapter for the Chromecast, available from the Google Store for $15.

Reddit user LeonJWood was having trouble connecting their Google Home unit to the wireless network available to them. It seems Google Home has difficulty connecting to 802.1x (WPA2 Enterprise) WiFi networks unless MAC authentication is set up to automatically allow the device to connect. Naturally, this is not allowed in some work and school environments, so they were forced to find an alternative route. Aware that the Chromecast and Google Home share similar hardware, they purchased the Ethernet Adapter from Google to see if it would work.

And indeed it did work! All you have to do is connect the Ethernet Adapter to the Google Home via the port in the back (which is hidden by the speaker grille). They do warn that anyone else on the network can see and control your Google Home, though. They have also noticed that music streamed to it from their smartphone will cut out from time to time, but this could be caused by other issues and might not be down to the Ethernet Adapter.

Source: /r/GoogleHome



from xda-developers http://ift.tt/2jKzCLD
via IFTTT

Users Report that NVIDIA Shield TV Now Pairs with PlayStation and Xbox Controllers

We’re seeing reports from people within the community that PlayStation and Xbox controllers will now pair with both the 2015 and 2017 NVIDIA Shield TV devices over Bluetooth. It’s unclear if this is specific to the NVIDIA Shield TV, or if it’s something that is supported because of the new Nougat update that both of these devices have. Either way, we’re seeing reports from users that say it’s working with the PlayStation 3, PlayStation 4 and the Xbox One S controllers directly over Bluetooth.

However, you should be aware that there are steps to take in order to get it to pair properly. One person says you’ll need to connect the controller to the NVIDIA Shield TV via a USB cable at first. Once the controller is connected to the set-top box, open the Settings application on the NVIDIA Shield TV. From there, scroll down a bit and the PlayStation 3 controller should be listed with a Bluetooth symbol. Once that connection has been made, you can disconnect the cable and play games using the controller over Bluetooth.

Another person says you’ll need to put the PlayStation 4 controller into pairing mode by pressing and holding the PS and Share buttons until the light starts to blink. From there, launch the Settings application on the NVIDIA Shield TV and go to Add Accessory. You should see the PlayStation controller listed there, so you can select it and complete the connection. You’ll then want to press and hold the PS button for about 10 seconds whenever you want to turn it off.

We’ve seen these controllers work with Android devices in the past, but we often had to have root access and the Sixaxis application in order to set them up properly. We’ve also seen the PlayStation 4 and Xbox One S controllers connect to Android devices with a simple OTG cable. But it’s nice to see these controllers finally connecting to Android TV devices natively over Bluetooth; this should make the NVIDIA Shield TV an even better device for all things gaming.

Source: /r/Android



from xda-developers http://ift.tt/2kKmSGr
via IFTTT

Rumor Reveals the Alleged Camera Specs for the Upcoming BlackBerry Mercury

We first started hearing rumors about the smartphone from BlackBerry that carried the codename Mercury back in June of last year. At the time, all we had to go on was that BlackBerry was working on three new Android devices and they carried the codenames Neon, Argon, and Mercury.

Neon and Argon have both been released since then (we now know them as the BlackBerry DTEK50 and the BlackBerry DTEK60). Then, at the start of December, rumors with actual details about the Mercury began to leak, suggesting that it would come with the physical QWERTY keyboard that BlackBerry is so well known for.

Other than a few tidbits about the device being made available on the Verizon Wireless network, we hadn’t really heard much about this upcoming smartphone since then. That is, until CES 2017, when BlackBerry previewed a new smartphone with the QWERTY keyboard we had been hearing about. Images of this smartphone showed up again in a Twitter post from the official BlackBerry Mobile account last week, too.

So it seems BlackBerry currently has plans to launch this new smartphone next month at MWC 2017 in Barcelona. We know it will be manufactured by TCL (the same company that manufactured the DTEK50 and the DTEK60), but that’s about as much official information as we have right now. Interestingly enough, though, a couple of new rumors claim to reveal the camera sensors that BlackBerry and TCL will be using for the upcoming smartphone.

If true, the device will be equipped with either a Samsung S5K4H8 or Omnivision OV8856 camera sensor on the front. This is an 8MP sensor with 1.12μm pixels that can shoot in 1080p at 30 frames per second. The same source has also revealed that it will be using the same camera sensor the Google Pixel uses on the back of the phone (we did a comprehensive breakdown of why this sensor is special). This is a 12MP Sony IMX378 sensor that can shoot 4K video. We’ll have to wait and see if BlackBerry’s post processing can match or beat what Google has in their Pixel phones, but the rumor suggests they’ll have the hardware to back it up.

Source 1: @rquandt Source 2: @rquandt



from xda-developers http://ift.tt/2jv4sVd
via IFTTT

Here Is How To Disable Dm-verity Warning On The OnePlus 3T

XDA Senior Member th3g1z has finally found a fix to disable dm-verity warning on the OnePlus 3T running Android 7.0. The fix doesn’t require flashing anything; you just need to execute two simple fastboot commands to get rid of the dm-verity warning. Head over to the linked thread for more details.
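For context, the reported fix boils down to a pair of fastboot OEM commands along these lines (reproduced from community reports rather than verified here – confirm the exact commands in th3g1z’s thread before running them, as OEM commands can vary between firmware versions, and the device must first be rebooted into the bootloader):

```shell
fastboot oem disable_dm_verity   # reportedly disables the dm-verity warning
fastboot oem enable_dm_verity    # reportedly restores the default behavior
```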



from xda-developers http://ift.tt/2kKoAI5
via IFTTT

samedi 28 janvier 2017

Guide: Installing and Running a GNU/Linux Environment on Any Android Device

As many of you may well be aware, the Android operating system is powered by the Linux kernel underneath. Despite the fact that both Android and GNU/Linux are powered by the same kernel, the two operating systems are vastly different and run completely different types of programs.

Sometimes, however, the applications available on Android can feel a bit limited or underwhelming, especially when compared to their desktop counterparts. Fortunately, you can get a GNU/Linux environment up and running on any Android device, rooted or non-rooted. (The following instructions assume a non-rooted device.)

For those power users on Android tablets, or other Android devices that have large screens (or can plug into a bigger screen), the ability to run desktop Linux software can go a long way towards increasing the potential that an Android device has for productivity.


Setting Up GNU/Linux on Android

To get a GNU/Linux environment set up on your Android device, you only need to install two applications from the Google Play store: GNURoot Debian and XServer XSDL. After you do that, you will only need to run a small handful of Linux commands to complete the installation.

GNURoot Debian provides a Debian Linux environment that runs within the confines of the Android application sandbox. It accomplishes this by leveraging a piece of software called proot, a userspace re-implementation of Linux’s chroot functionality, which is used to run a guest Linux environment inside of a host environment. Chroot normally requires root access to function, but by using proot you can achieve similar functionality without needing root privileges.

GNURoot comes with a built-in terminal emulator for accessing its Debian Linux environment. This is sufficient for running command-line software; running graphical software, however, requires an X server to be available as well. The X Window System was designed with separate client and server components in order to provide more flexibility (a faster, more powerful UNIX mainframe could run the applications as X clients, displaying on X server instances running on much less powerful and less sophisticated terminals).

In this case, we will use a separate application, XServer XSDL, that GNURoot applications will connect to as clients. XServer XSDL is a complete X server implementation for Android powered by SDL that has many configurable options such as display resolution, font size, different types of mouse pointer behavior, and more.


Step-by-Step Guide

1. Install GNURoot Debian and XServer XSDL from the Play Store.

2. Run GNURoot Debian. The Debian Linux environment will unpack and initialize itself, which will take a few minutes. Eventually, you will be presented with a “root” shell. Don’t be misled by this – it is actually a fake root account that is still running within the confines of the Android application sandbox.

3. Run apt-get update and apt-get upgrade to ensure you have the most up-to-date packages on your system. apt-get is the command-line front end to Debian’s package management system, and it is what you will use to install software into your Debian Linux environment.

4. Once you are up-to-date, it’s time to install a graphical environment. I recommend installing LXDE as it is simple and light-weight. (Remember, you’re running Debian with all the overhead of the Android operating system in the background, so it’s best to conserve as many resources as you can.) You can either do apt-get install lxde to install the desktop environment along with a full set of tools, or apt-get install lxde-core to only install the desktop environment itself.

5. Now that we have LXDE installed, let’s install a few more things to complete our Linux setup.

XTerm – this provides access to the terminal while in a graphical environment
Synaptic Package Manager – a graphical front-end to apt-get
PulseAudio – a sound server that handles audio playback

Run apt-get install xterm synaptic pulseaudio to install these utilities.

6. Finally, let’s get the graphical environment up and running. Start XServer XSDL and have it download the additional fonts. Eventually you will get to a blue screen with some white text – this means that the X server is running and waiting for a client to connect. Switch back to GNURoot and run the following two commands:

export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
startlxde &

Then, switch to XServer XSDL and watch the LXDE desktop come up onto your screen.

I recommend putting the above two commands into a shell script so that you can easily restart LXDE if you close the session or if you need to restart your device.
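A minimal version of that script might look like the following (the file name is just an example; it assumes the display and PulseAudio settings used above):

```shell
#!/bin/sh
# Write a small launcher script so LXDE can be restarted with one command.
# Assumes XServer XSDL is serving display :0 with PulseAudio listening on
# TCP port 4712, matching the export line used earlier in this guide.
cat > "$HOME/start-lxde.sh" <<'EOF'
#!/bin/sh
export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
startlxde &
EOF
chmod +x "$HOME/start-lxde.sh"
```

After that, running sh ~/start-lxde.sh from the GNURoot terminal is all it takes to bring the desktop back up.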


Installing Linux Applications

Congrats! You’ve successfully gotten Debian Linux up and running on your Android device, but what good is running Linux without apps? Fortunately, you’ve got a massive repository of Linux applications at your fingertips just waiting to be downloaded. We’ll use the Synaptic Package Manager, which we installed earlier, to access this repository.

Click the “start” button at the lower left-hand corner, click Run, and then type synaptic. The Synaptic Package Manager will load. From here, simply press the Search button at the top and type the name of the application you’d like to install. Once you’ve found an application, right-click it and select “Mark for Installation”. When you are finished marking packages, click the Apply button at the top to start the installation. Uninstalling packages follows the same procedure, except you right-click and select “Mark for Removal” instead.

Of course, since this isn’t a real Linux installation but rather a Linux environment running on top of, and within the constraints of, Android, there are a couple of limitations to be aware of. Some applications will refuse to run or will crash, usually because some resources that are normally exposed on GNU/Linux systems are kept hidden by Android. Also, if a regular Android app can’t do something, then usually a Linux application running within Android can’t either, so you won’t be able to perform tasks such as partitioning hard drives. Lastly, games requiring hardware acceleration will not work. Most standard everyday apps, however, will run just fine. Some examples include Firefox, LibreOffice, GIMP, Eclipse, and simple games like PySol.


I hope that you find this tutorial useful. While I personally performed these steps on my Google Pixel C, you can do this on most Android devices – preferably a tablet with access to keyboard and mouse peripherals, of course. If you already run a GNU/Linux distribution on your Android device, let us know what you are using it for below!



from xda-developers http://ift.tt/2jCgAqZ
via IFTTT

Rovo89: Update on Development of Xposed for Nougat

New Leak Shows the LG Watch Style In Silver and Rose Gold, may Start at $249

We’ve known for a while now that LG is likely working on two new Android Wear watches, allegedly called the LG Watch Style and the Watch Sport. While we have already seen images of both the LG Watch Style and Watch Sport, those were unfortunately quite low-resolution renders. Now, a new leak from Evan Blass (@evleaks) gives us a clearer picture (literally) of what LG’s upcoming Android Wear smartwatch, the Watch Style, may look like.

The images of the LG Watch Style shared by Evan Blass match those leaked by TechnoBuffalo a few days ago.

As you can see, the renders show the LG Watch Style in both silver and rose gold, sporting leather straps. In terms of design, the Watch Style appears to be a classic fashion watch, unlike its counterpart the Watch Sport, which is said to be the bigger of the two and will likely feature a heart rate sensor, GPS, and cellular connectivity.

The LG Watch Style and LG Watch Sport are expected to launch at Google’s platform event on February 9 where Google is expected to detail their much awaited Android Wear 2.0 update as well. Both watches are said to be manufactured by LG and Google in a Nexus-style collaboration, meaning the hardware will be handled by LG with Google providing the software and any future updates.

If previous rumors are to be believed, the LG Watch Style will feature a 1.2″ 360×360 AMOLED screen, a 240mAh battery, 512MB of RAM, and Bluetooth (no cellular radio) for connectivity. Furthermore, according to a source speaking with AndroidPolice, the Watch Style will launch at a price point of $249. For more concrete details, though, we will have to wait for the official announcement.


Source: @evleaks Source: AndroidPolice



from xda-developers http://ift.tt/2jIAJcu
via IFTTT

A Guide to Editing RAW Photography — Get the Most out of Your Smartphone’s Camera


After exploring the RAW capabilities of my OnePlus 3T and Sony NEX-5 cameras, an array of readers responded with questions and comments on RAW photography and their experiences. Many expressed the desire to learn how to edit photography, and particularly how to deal with RAW file formats on both mobile devices and desktop operating systems, and I was thrilled to see such a willingness to engage with something new like RAW photography. I was also deeply happy to have several readers tell me that I had inspired them to explore photography once again, or even for the first time – it can come as a surprise to many that the device in their pockets is often their best choice for exploring. In light of this, my hope is that some assistance for those struggling to begin will continue to encourage those interested in photography, RAW or not, to persevere.

Remembering back to my first forays into photography and editing, I was lucky enough to ease into the prospect bit by bit, beginning with something as simple as the built-in editor in my HTC Incredible 2’s gallery app. If I am remembering correctly, I stumbled upon Adobe Lightroom as an app for my iPad 3, which became my go-to editing device until I built my first desktop PC. Over the course of a month or so, I essentially explored each slider and option until I was relatively familiar with the program. I can easily recommend this to anyone with a lot of patience and curiosity, as you will inevitably find your own preferences along the way while also learning to use a powerful editing suite independently.

Nevertheless, having someone to guide you through the very first steps of editing and break down the menacing façade that Lightroom and other editors can present to the user is of course extremely useful. I will attempt to be that guide!


First Steps

As several curious and intrepid readers soon discovered, shooting in RAW is not necessarily the most intuitive experience, especially once one goes to find or edit the RAW format files they have produced. As RAW files, especially DNGs, are innately not images straight out of camera, nearly all gallery apps simply will not register that they exist, both on mobile and desktop operating systems. This is not a criticism of gallery apps, but rather an unavoidable reality of RAW formats. As such, you will want to either install one of a handful of free RAW file managers, or bite the bullet and pay for something like Photo Mate R3 (~$8). Adobe Lightroom for mobile devices is likely your absolute best option, being free and well-designed.

For those of you looking for something a bit different, Photo Mate R3 is a fully-fledged mobile editor with almost all of the granular controls that Lightroom and other desktop editors offer. It also provides a gallery function with an array of sorting options, allowing the viewer to, say, selectively view only RAW format images and preview their thumbnails. The only major downside I noted is a lack of granular noise reduction controls of the sort that Lightroom offers. RAW files express all the noise the camera generates (a lot) and can appear rather off-putting if one does not first consider that lossy formats like JPEGs include some often heavy-handed noise reduction that occurs as the RAW data is converted and compressed. RAW lets you decide how much noise reduction is needed, potentially preventing the overly-soft images that smartphone cameras are often infamous for.

If you have access to a computer, there are numerous free options for editing RAW photography, such as GIMP and Rawtherapee. Rawtherapee is a genuinely impressive program dedicated solely to editing RAW format images and is easy to recommend. There is also Google’s free Nik editing suite, which offers a dedicated program for noise reduction to assist those on a budget who can’t stand noise but would prefer to keep their editing workflow as mobile as possible.

A brief glance at Rawtherapee 5.0’s interface (Rawtherapee).

For those of you willing to fork over the cash, however, my one true photo editing love has always been Adobe Lightroom. It may be an irrational attachment to the program I am simply most familiar with, but I find that it offers a wonderful, intuitive interface and an almost invaluable organizational aspect that allows you to comfortably back up a database of around 40+ GB of edited photos while still retaining exact change histories and the original files. While next to nothing compared to professional photographers or very serious amateurs, I’ve taken and edited thousands of photos in the 5 years I’ve been active, and have a history of almost every single one in my Lightroom library.

A small snippet of my primary Lightroom catalog. My edited photos can be found at my Flickr and VSCO accounts.

While verifying that my understanding of Adobe Lightroom mobile was accurate, I discovered that free users can in fact edit RAW formats without a CC subscription! While the free version loses a number of features, it is still well-featured and includes several noise reduction filters, albeit without granular control over them (aside from picking low, medium, and high reduction options). Like Photo Mate R3, the Lightroom app offers a useful gallery feature that lets you preview RAW thumbnails and filter out non-RAW images. This app is definitely my recommendation for those looking for a slick, user-friendly solution. While experienced users may find improved utility in Photo Mate R3’s broader range of options, Lightroom will be more than enough for most mobile editors. This article provides a great overview of the app and its RAW editing features.


General Tips and Suggestions for Editing Photography

While providing granular tutorials for each of the applications mentioned above is a bit beyond the scope of this article, what I can do is explain some of the more common options you will have at your disposal, regardless of which one you choose to adopt. I will be using the desktop version of Adobe Lightroom (5.4) to demonstrate these features. After the process of finding your RAW files (usually .DNGs for mobile devices) and importing them into your app of choice, you will be presented with several options. Generally speaking, these options will be intended to modify the tone (exposure/lighting), white balance, and color in your photos.

Some of the most useful and intuitive editing methods in Lightroom are relatively unique to it, and even then only in the desktop app. My favorite way to modify a photo’s tone is through the histogram (the graph at the top of the screenshot below), which allows you to click on one of five sections (blacks, shadows, exposure, whites, highlights) and drag it left or right to reduce or increase the prevalence of that specific light type. The tone curve, found below the Basic section, can also be dragged about in a similar fashion, but is generally only needed for slightly modifying a nearly-complete image or recovering detail in an image that was drastically over- or underexposed. This can generally all be done with the sliders on the right as well, but that takes somewhat longer and is not nearly as fun! A great exploration of the utility of histograms and how to read them can be found here.

Two images and their related histograms.

Traveling down the options in the menu pictured below, we begin with ‘WB’, or white balance. This is used to improve the accuracy of color representation in photos by modifying the temperature and tint, directing the picture towards your preferred outcome, which may include fixing imperfect white balancing in camera. In desktop and mobile Lightroom, you have the option of selecting the eye dropper, which effectively auto-corrects white balance once you point it at a spot in your photo that you know should be a neutral grey or white.

Tone settings come next, beginning with options for exposure and contrast. Exposure modifies the global brightness unselectively. Contrast further darkens darker areas of the image and brightens lighter areas. After these more heavy-handed options come more precise controls that can also be manipulated through the histogram on top, as I previously explained. The highlights slider will modify only the brightest sections of the image, allowing you to tame overexposed images (you may have seen or heard the term “blown highlights”). Shadows, on the other hand, can help recover lost detail in dark areas of images. Lastly, Whites and Blacks intuitively allow pixels leaning towards white or black to be made brighter or darker. Attentive readers may notice a theme so far of combinations of controls that offer large changes (whites, blacks) paired with controls that offer more detailed modifications to smaller parts of the image (highlights, shadows).

Continuing this trend, Clarity is effectively a method of only adding contrast to mid-tones (mid meaning middle of the histogram). In doing so, the Clarity slider can give the benefit of added contrast while preventing the noise or grain (and often an uglier image) that can come with overuse of the global Contrast slider. This option is generally unique to Lightroom, but it can be partially replicated by experimenting with white and black levels (increased contrast would mean darker blacks and brighter whites). This method won’t add edge detail like Clarity, but it will more subtly add contrast.

Saturation and Vibrance are the last basic settings one may frequently want to use. Saturation is the color equivalent of Exposure, allowing the user to globally deepen or lighten all colors in an image. Vibrance helps to avoid the downfall of global saturation changes by only adjusting the least (+) or most (-) saturated colors.

Finally, there are several more complex and granular settings that can be found in Lightroom and other desktop editing suites. Something I often find myself using is detailed saturation, hue, and luminance control (on the right), giving me the ability to, say, recover oversaturated blues or greens, or better express the yellows and oranges in a sunset photo with subpar white balance. The Detail section (on the left) is where noise reduction and sharpening settings can be found, very useful options to have when editing RAW files. Lightroom helpfully provides a small window with a highly magnified view, which makes it considerably easier to avoid introducing ugly artifacts or obscuring detail when modifying sharpness and adding noise reduction.



Practice, Practice, and More Practice!

As a tried-and-true trope of many a guide, my best suggestion for those just beginning to stretch their photography-editing legs is to not give up and keep trying. Mistakes will be made and modifications will be overdone, but in time you will begin to develop a more instinctive understanding of editing and likely come into a style and workflow of your own. Mine has taken many years to develop and I clearly remember struggling at first, as well as taking a look at photos I’d edited years ago only to be aghast at the aesthetic decisions of past-me. I’m still learning more than 5 years in, and I even managed to learn a couple new things about editing photos in the process of writing this. In all its breadth, photography is essentially an activity with constant opportunity for learning, and rather than being daunting, it simply makes it that much more exciting and rewarding.

Amidst the humbling response my previous article received, multiple readers shared some of their own impressive smartphone photography and blew me away. If you have taken any photos with your phone that you are proud of and would like to share, feel free to post them in the comments below this article, as well as on its corresponding Facebook posts or tweets. An upcoming article in this series will include a collection of user-submitted photography, so don’t miss out!

Also ahead will be a brief tutorial on how to use the manual mode available on many modern smartphone cameras in order to best take advantage of their capabilities. 



from xda-developers http://ift.tt/2kyrDQr
via IFTTT

Moto G5 Passes Through the FCC, Likely to be Unveiled at MWC 2017

As we inch closer to Mobile World Congress 2017, one of the hottest events globally for smartphones, more phones are being leaked along the way. This time, we get new information on the Moto G5, which has passed through the FCC.

The FCC filing does not reveal a whole lot of spec info on the Moto G5, but it does let us know that the device will be coming with a 3,000 mAh battery. This phone will also support a form of quick charging, likely called Turbo Charging based on past naming conventions. This is inferred from the adapter specifications listed: the included charging adapter can output up to 14.4W (at 9V/1.6A or 12V/1.2A), alongside a standard 5V/1.6A mode. This is a nice change for the non-Plus variant, as only the Moto G4 Plus included the Turbo Charger in the box while the Moto G4 came with a puny 5V/0.55A charging brick.
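The adapter math is simple (watts = volts × amperes), and a quick check shows that only the 9V and 12V modes reach the full Turbo Charging figure:

```shell
# Rated output per adapter mode, computed as W = V * A.
# The 5V mode tops out at 8W; 14.4W is reached only at 9V and 12V.
for mode in "9 1.6" "12 1.2" "5 1.6"; do
  set -- $mode
  awk -v v="$1" -v a="$2" 'BEGIN { printf "%gV/%gA = %.1fW\n", v, a, v * a }'
done
```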

The other notable point in the FCC filing is the inclusion of NFC. Previously, the main Moto G4 lineup did not come with NFC capabilities, although the “Play” variants did. Adding NFC to the base model implies that all the other variants will have it as well.

Motorola does have an event planned for Mobile World Congress on 26th February 2017, where the Moto G5 and the Moto G5 Plus are likely to be unveiled. As for specs, the Moto G5 and G5 Plus are likely to stick with a 5.5″ display and switch to the Qualcomm Snapdragon 625 SoC. Leaked images of the G5 Plus have been floating around, but we will have to wait for more concrete information.

What are your thoughts on the Motorola Moto G5 and Moto G5 Plus so far? Let us know in the comments below!

Source: FCC Via: MotoG3.com



from xda-developers http://ift.tt/2kdRQpJ
via IFTTT

vendredi 27 janvier 2017

AutoVoice Integration Finally makes its way to Google Home, Here’s how to Use It

After a month in Google’s approval limbo, AutoVoice has finally been approved for use as a third-party integration in Google Home. With AutoVoice integration, you can send commands to your phone that Tasker will be able to react to, allowing you to trigger countless automation scripts with just your voice.

Previously, this required a convoluted workaround involving IFTTT sending commands to your device via Join, but now you can send natural language commands straight to your device. We at XDA have been awaiting this release, and now that it’s here, we’ll show you how to use it.


The True Power of Google Home has been Unlocked

The above video was made by the developer of AutoVoice, Joao Dias, prior to the approval of the AutoVoice integration. I am re-linking it here only to demonstrate the possibilities of this integration, which is something we can all now enjoy since Google has finally rolled out AutoVoice support for everyone. As with any Tasker plug-in, there is a bit of a learning curve involved, so even though the integration has been available since last night, many people have been confused as to how to make it work. I’ve been playing with this since last night and will show you how to make your own AutoVoice commands trigger through speaking with Google Home.

A request from Joao Dias, developer of AutoVoice: Please be aware that today is the first day that AutoVoice integration with Google Home is live for all users. As such, there may be some bugs that have yet to be stamped out. Rest assured that he is hard at work fixing anything he comes across before the AutoVoice/Home integration is released to the stable channel of AutoVoice in the Play Store.


Getting Started

There are a few things you need before you can take advantage of this new integration. The first and most obvious requirement is a Google Home device. If you don’t have one yet, they are available in the Google Store among other retailers. Amazon Alexa support is pending approval as well, so if you have one of those you will have to wait before you can try out this integration.

Once you have the required applications installed (Tasker and the AutoVoice beta, alongside the Google Home app), it’s time to get to work. The first thing you will need to do is enable the AutoVoice integration in the Google Home app. Open up the Google Home app and then tap on the Remote/TV icon in the top right-hand corner. This will open up the Devices page where it lists your currently connected cast-enabled devices (including your Google Home). Tap on the three-dot menu icon to open up the settings page for your Google Home. Under “Google Assistant settings” tap on “More.” Finally, under the listed Google Home integration sections, tap on “Services” to bring up the list of available third-party services. Scroll down to find “AutoVoice” in the list, and in the about page for the integration you will find the link to enable the integration.

Once you have enabled this integration, you can now start talking to AutoVoice through your Google Home! Check if it is enabled by saying either “Ok Google, ask auto voice to say hello” or “Ok Google, let me speak to auto voice.” If your Google Home responds with “sure, here’s auto voice” and then enters the AutoVoice command prompt, the integration is working. Now we can set up AutoVoice to recognize our commands.


Setting up AutoVoice

For the sake of this tutorial, we will make a simple Tasker script to help you locate your phone. By saying any natural variation of “find my phone”, Tasker will start playing a loud beeping noise so you can quickly discern where you left your device. Of course, you could easily make this more complex, perhaps locating your device via GPS and then sending yourself an e-mail with a photo from its camera attached, but the part we will focus on is simply teaching you how to get Tasker to recognize your Google Home voice commands. Using your voice, there are two ways you can issue commands to Tasker via Google Home.

The first is by speaking your command exactly as you set it up. That means there is absolutely no room for error in your command. If you, for instance, want to locate your device and you set up Tasker to recognize when you say “find my phone” then you must exactly say “find my phone” to your Google Home (without any other words spliced in or placed at the beginning or end) otherwise Tasker will fail to recognize the command. The only way around this is to come up with as many possible variations of the command as you can think of, such as “find my device”, “locate my phone”, “locate my device” and hope that you remember to say at least one variant of the command you set up. In other words, this first method suffers from the exact same problem as setting up Tasker integration via IFTTT: it is wildly inflexible with your language.

The second, and my preferred method, is using Natural Language. Natural Language commands allow you to speak naturally to your device, and Tasker will still be able to recognize what you are saying. For instance, if I were to say something much longer like “Ok Google, can you ask auto voice to please locate my device as soon as possible” it will still recognize my command even though I threw in the superfluous “please” and “as soon as possible” into my spoken command. This is all possible thanks to the power of API.AI, which is what AutoVoice checks your voice command against to interpret what you meant to say and return with any variables you might have set up.

Sounds great! You are probably more interested in the second option, as I was. Unfortunately, Natural Language commands are taxing on Mr. Dias’s servers, so you will be required to sign up for a $0.99 per month subscription in order to use them. It is a bit of a downer that this is required, but the fee is more than fair considering how little it costs and how much more powerful and useful it will make your Google Home.

Important: if you want to speak “natural language commands” to your Google Home device, then you will need to follow these next steps. Otherwise, skip to creating your commands below.


Setting up Natural Language Commands

Since AutoVoice relies on API.AI for its natural language processing, we will need to set up an API.AI account. Go to the website and click “sign up free” to make a free account. Once you are in your development console, create a new agent and name it AutoVoice. Make the agent private and click save to create the agent. After you save the agent, it will appear in the left sidebar under the main API.AI logo.

Once you have created your API.AI account, you will need to get your access tokens so that AutoVoice can connect to your account. Click on the gear icon next to your newly created agent to bring up the settings page for your AutoVoice agent.

Under “API keys” you will see your client access token and your developer access token. You will need to save both. On your device, open up AutoVoice beta. Click on “Natural Language” to open up the settings page and then click on “Setup Natural Language.” Now enter the two tokens into the given text boxes.

Now AutoVoice will be able to send and receive commands from API.AI. However, this functionality is restricted until you subscribe to AutoVoice. Go back to the Natural Language settings page and click on “Commands.” Right now, the command list should be empty save for a single command called “Default Fallback Intent.” (Note: in my screenshot, I have already set up a few of my own.) At the bottom, you will notice a toggle called “Use for Google Assistant/Alexa.” If you enable this toggle, you will be prompted to subscribe to AutoVoice. Accept the subscription if you wish to use Natural Language commands.


Creating Tasker Profiles to react to Natural Language Commands

Open up Tasker and click on the “+” button in the bottom right hand corner to create a new profile. Click on “Event” to create a new Event Context. An Event Context is a trigger that is only fired once when the context is recognized – in this case, we will be creating an Event linked to an AutoVoice Natural Language Command. In the Event category, browse to Plugin –> AutoVoice –> Natural Language.

Click on the pencil icon to enter the configuration page to create an AutoVoice Natural Language Command. Click on “Create New Command” to build an AutoVoice Command. In the dialog box that appears, you will see a text field to input your command as well as another field to enter the response you want Google Home to say. Type or speak the commands you want AutoVoice to recognize. While you are not required to list every possible variant of the command, list at least a few just in case.


Pro-tip: you can create variables out of your input commands by long-pressing on one of the words. In the pop-up that appears, you will see a “Create Variable” option alongside the usual Cut/Copy/Select/Paste options. If you select this, you will be able to pass that particular word as a variable to API.AI, which API.AI can then return in its response. This is useful when you want Google Home to respond with variable responses.

For instance, if you build a command saying “play songs by $artist” then you can have the response return the name of the artist that is set in your variable. So you can say “play songs by Muse” or “play songs by Radiohead” under the same command, and your Google Home will respond with the same band/artist name you mentioned in your command. My tutorial below does not make use of this feature as it is reserved for more advanced use cases.
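Conceptually, the variable comes back inside the JSON that API.AI returns once a command matches. The sketch below shows how such a parameter could be pulled out of a reply; the response shape follows API.AI's v1 documentation, and the intent name and values here are made up for illustration.

```python
# Hypothetical API.AI v1 reply after matching "play songs by $artist".
sample_response = {
    "result": {
        "resolvedQuery": "play songs by Radiohead",
        "action": "playsongs",
        "parameters": {"artist": "Radiohead"},  # the long-pressed word, captured
    }
}

def extract_parameter(response: dict, name: str) -> str:
    """Read a named parameter (a word turned into a variable) from the reply."""
    return response.get("result", {}).get("parameters", {}).get(name, "")

artist = extract_parameter(sample_response, "artist")
print(f"Now playing songs by {artist}")  # -> Now playing songs by Radiohead
```

Whether you say “Muse” or “Radiohead,” the same command matches and the captured word simply rides along in `parameters`, which is how the spoken response can echo it back.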


Once you are done building your command, click “Finished.” A dialog box will pop up asking what you want to name the natural language command. Name it something descriptive. By default it names the command after the first command you entered, which should be sufficient.

Next, it will ask you what action you want to set. This allows you to customize what command is sent to your device, and it will be stored in %avaction. For instance, if you set the action to be “findmydevice”, the text “findmydevice” will be stored in the %avaction variable. This won’t serve any purpose for our tutorial, but in later tutorials where we cover more advanced commands, we will make use of it.
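In pseudocode terms, %avaction works like a routing key: the action string you set travels with the matched command, and a Task can branch on it. A small hypothetical sketch (the action names and handlers are invented for illustration):

```python
# Mimic a Tasker Task branching on the %avaction variable.
ACTION_HANDLERS = {
    "findmydevice": "sound the alarm",
    "playsongs": "start music playback",
}

def handle_action(avaction: str) -> str:
    """Dispatch on the action string, the way a Task could branch on %avaction."""
    return ACTION_HANDLERS.get(avaction, "unknown action")

print(handle_action("findmydevice"))  # -> sound the alarm
```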

Exit out of the command creation screen by clicking on the checkmark up top, as you are now finished building and saving your natural language command. Now, we will create the Task that will fire off when the Natural Language Command is recognized. When you go back to Tasker’s main screen, you will see the “new task” creation popup. Click on “new task” to create a new task. Click on the “+” icon to add your first Action to this Task. Under Audio, click on “Media Volume.” Set the Level to 15. Go back to the Task editing screen and you will see your first action in the list. Now create another Action but this time click on “Alert” and select “Beep.” Set the Duration to 10,000ms and set the Amplitude to 100%.

If you did the above correctly, you should have the following two Actions in the Task list.

Exit out of the Task creation screen and you are done. Now you can test your creation! Simply say “Ok Google, ask auto voice to find my phone” or any natural variation that comes to mind, and your phone should start loudly beeping for 10 seconds. The only thing you have to say exactly is the trigger that makes Google Home start AutoVoice – the “Ok Google, ask auto voice” or “Ok Google, let me speak to auto voice” part. Anything you say afterwards can be as free-flowing and natural as you like; the magic of API.AI makes it so that you can be flexible with your language!

Once you start creating lots of Natural Language Commands, it may be cumbersome to edit all of them from Tasker. Fortunately, you can edit them straight from the AutoVoice app. Open AutoVoice and click on “Natural Language” to bring up its settings. Under Commands, you should now see the Natural Language command we just made! If you click on it, you can edit nearly every single aspect of the command (and even set variables).


Creating Tasker Profiles to react to non-Natural Language Commands

In case you don’t want to subscribe to AutoVoice, you can still create a similar command as above, but you will need to list every possible phrasing you can think of to trigger the task. The biggest difference in this setup is that when you are creating the Event Context, you must select AutoVoice Recognized rather than AutoVoice Natural Language. You will build your command list and responses in a similar manner, but API.AI will not handle any part of parsing your spoken commands, so you must be 100% accurate in speaking one of these phrases. Of course, you will still be able to edit any of these commands much like you could with Natural Language.

Otherwise, building the linked Task is the same as above. The only thing that differs is how the Task is triggered. With Natural Language, you can speak more freely. Without Natural Language, you have to be very careful how you word your command.


Conclusion

I hope you now understand how to integrate AutoVoice with Google Home. For any Tasker newbies out there, getting around the Tasker learning curve may still pose a problem. But if you have any experience with Tasker, this tutorial should serve as a nice starting point to get you to create your own Google Home commands. Alternatively, you can view Mr. Dias’ tutorial in video form here.

In my limited time with the Google Home, I have come up with about a dozen fairly useful creations. In future articles, I will show you how to make some pretty cool Google Home commands such as turning on/off your PS4 by voice, reading all of your notifications, reading your last text message, and more. I won’t spoil what I have in store, but I hope that this tutorial excites you for what will be coming!



from xda-developers http://ift.tt/2kCU2rs
via IFTTT