AndroidTO 2016 Recap

Now in its 7th year, the AndroidTO conference was held at the beginning of November. Although it's an Android-focused event, there were also a few general mobile talks that were quite interesting. Here's my rundown of the sessions I attended:

3 Ways to Add Machine Learning to Your App

Yufeng Guo (@yufengg), a Google Developer Advocate for the Cloud Platform, spoke about Google's machine learning initiatives.

This DevFest season, Google has really been spreading the word about ML and TensorFlow. I had already heard two talks on these subjects at DevFestDC, so I was curious to hear another Googler's perspective.

Yufeng spoke about the challenges of training and prediction in a mobile context: training requires far more computational horsepower, so even today's smartphone processors are no match for hundreds of networked CPUs. For each combination of these dimensions (pre-trained vs. custom model, and on-device vs. cloud prediction), Google offers a way to use ML in your apps:

  • The Mobile Vision API gives you a pre-trained model with processing done on the device. It offers fast face detection (not recognition), returning image coordinates of facial features and expression probabilities (e.g. smiling, joyfulness); see the sketch after this list.
  • The Cloud Vision API is another pre-trained model, but offers much more powerful image analysis. It can take arbitrary photos and label them with descriptions across a variety of domains, including logos and landmarks. In addition, there is a Speech API for converting audio to text, and a Natural Language API for analyzing the structure and meaning of text.
  • For all other custom applications, there's TensorFlow, whether it's used on mobile or in the cloud.
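
To make the Mobile Vision bullet concrete, here's a minimal sketch of on-device face detection in Java. The detector calls come from the play-services-vision library; the class name and the logging are my own framing:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.Log;
    import android.util.SparseArray;

    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;

    public class SmileChecker {

        // Detect faces in a single bitmap and log how likely each one is smiling.
        public static void detectSmiles(Context context, Bitmap bitmap) {
            FaceDetector detector = new FaceDetector.Builder(context)
                    .setTrackingEnabled(false) // single still image, no tracking needed
                    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                    .build();
            try {
                Frame frame = new Frame.Builder().setBitmap(bitmap).build();
                SparseArray<Face> faces = detector.detect(frame);
                for (int i = 0; i < faces.size(); i++) {
                    Face face = faces.valueAt(i);
                    // A probability in [0, 1], or -1 if it couldn't be computed
                    Log.d("SmileChecker", "Face " + i + " smiling: "
                            + face.getIsSmilingProbability());
                }
            } finally {
                detector.release(); // frees the underlying native detector
            }
        }
    }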

Yufeng even did a few live demos involving the audience.

Supercharging Your Android Release with Fastlane

Andrea Falcone (@asfalcone) gave the same talk I attended at Droidcon NYC.

Animations to Guide Us All

(Slides)

Marcos Paulo Damasceno (@marcospaulosd) walked us through a series of progressively fancier activity transitions in a sample app he built.

Some things I learned about activity transitions:

  • Get started by calling getWindow().setEnterTransition(), passing one of the predefined transitions from the android.transition package.
  • After a shared element transition, to have other views in the target activity slide in, create a Slide transition, then add views to it that should be animated. Group these transitions into a TransitionSet and set them on the window.
  • Call setTransitionGroup(false) to tell Android that a ViewGroup should not be treated as a unit, and that its children should be animated individually.
  • After an INVISIBLE view moves to its final position, use a shared element callback to add a reveal effect, making the view VISIBLE and then starting the animation.

All of the above ideas are demonstrated in a slick demo app that Marcos wrote: https://github.com/marcospaulo/cinema_example
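
Here's a rough sketch, in Java, of how those pieces might fit together in the launched activity. The layout and view IDs (activity_detail, title, fab, and so on) are placeholders:

    import android.app.Activity;
    import android.app.SharedElementCallback;
    import android.os.Bundle;
    import android.transition.Fade;
    import android.transition.Slide;
    import android.transition.TransitionSet;
    import android.view.Gravity;
    import android.view.View;
    import android.view.ViewAnimationUtils;
    import android.view.ViewGroup;

    import java.util.List;

    public class DetailActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_detail);

            // Slide the non-shared views in, fading as they go
            Slide slide = new Slide(Gravity.BOTTOM);
            slide.addTarget(R.id.title);
            slide.addTarget(R.id.description);

            TransitionSet enterSet = new TransitionSet()
                    .addTransition(slide)
                    .addTransition(new Fade());
            getWindow().setEnterTransition(enterSet);

            // Let this container's children animate individually, not as one unit
            ViewGroup content = (ViewGroup) findViewById(R.id.content_container);
            content.setTransitionGroup(false);

            // Reveal an initially INVISIBLE view once the shared element's
            // end state has been captured
            setEnterSharedElementCallback(new SharedElementCallback() {
                @Override
                public void onSharedElementEnd(List<String> names, List<View> elements,
                                               List<View> snapshots) {
                    View fab = findViewById(R.id.fab); // android:visibility="invisible" in XML
                    fab.setVisibility(View.VISIBLE);
                    ViewAnimationUtils.createCircularReveal(fab,
                            fab.getWidth() / 2, fab.getHeight() / 2,
                            0f, fab.getWidth()).start();
                }
            });
        }
    }

Tying the reveal to the shared element callback, rather than a hard-coded delay, keeps the two animations in sync however long the transition takes.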

Machine Learning Tools

Vihan Jain (LinkedIn), a Google employee, gave (yet) another overview of the company's machine learning tools.

He described TensorFlow as a way to perform computations, defined as a directed acyclic graph (DAG), to optimize an objective function. These computations are defined in a high-level language, then compiled and optimized in the cloud. A lower-level runtime (e.g. a mobile CPU or a GPU) can execute part or all of the graph, and push data through it. He suggested we look at a codelab called TensorFlow for Poets, which introduces the idea of transfer learning, where an existing trained model can be re-trained on a similar problem.

For building something from scratch, Google uses the phrase Wide and Deep Learning to describe its techniques for building recommender, search, and ranking systems.

Give Your App a Boost in Quality with Firebase

Doug Stevenson (@codingdoug), a familiar face from all of his Firebase advocacy in videos and articles, gave a lightning talk about how to use Firebase to improve app quality.

I jotted down a few notes on my phone.
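
One concrete example of the quality tooling on offer at the time (I won't claim this is exactly what Doug demoed) is Firebase Crash Reporting, which can record caught, non-fatal exceptions alongside your crashes. A minimal sketch in Java, where syncUserSettings() is a hypothetical app-specific call:

    import com.google.firebase.crash.FirebaseCrash;

    import java.io.IOException;

    public class SettingsSyncer {

        void sync() {
            try {
                syncUserSettings(); // hypothetical app-specific work
            } catch (IOException e) {
                FirebaseCrash.log("Sync of user settings failed"); // breadcrumb
                FirebaseCrash.report(e); // shows up as a non-fatal in the console
            }
        }

        private void syncUserSettings() throws IOException {
            // ... network and storage work would go here ...
        }
    }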

Dealing with Legacy Code

Nathalie Simons (LinkedIn) from TribalScale spoke about how to deal with old, inherited codebases. This was a generally applicable talk, not dealing directly with Android.

Legacy code often comes with the baggage of bugs, unknown behaviour and/or unknown design decisions. People are forgetful and don't remember the what and why of code they wrote.

Nathalie outlined three approaches if you find yourself working with another company's code:

  • Work with what you have: study existing code to figure out how it is currently working, without trying to clean it up. Make use of logging and breakpoints with the goal of understanding.
  • Start from scratch: study the existing code, then rewrite it. Put yourself in the previous developers' shoes: they had reasons for what they did, so figure out why before judging. This technique may be risky, because you don't know what the original business requirements were.
  • Take what you need: after studying the legacy code, cherry-pick what you need onto a clean slate. Introduce portions of old code incrementally, so you know where to place blame when something breaks.

Building Integrated Medical Apps

(Slides)

James Agnew (@jamesagnew) from the Centre for eHealth Global Innovation spoke about his team's challenges in developing healthcare apps.

To start, James presented an example of rethinking healthcare from a 2010 article in WIRED about redesigning blood test results. The designs were turned into a proof of concept by Boston Children's Hospital.

The Centre's mandate is to develop apps for patients to manage chronic conditions, so they can be active participants in their own health. Such apps help patients follow their care plan (measuring weight or blood sugar, reminders to follow a morning routine), help doctors manage patients' health (uploading data to a doctor), make sense of collected data, and accumulate data for research.

As one can imagine, healthcare apps have to deal with a number of standards and integration challenges:

  • Bluetooth often suffers from flaky connections and a lack of standard profiles for medical applications (like a thermometer or blood pressure monitor).
  • The underlying premise of most electronic health record systems is replicating data everywhere, which doesn't match the intermittent connectivity and limited storage of smartphones.
  • FHIR is a modern initiative to develop a data format and API for exchanging health records. There is an open-source reference implementation in Java available at http://hapifhir.io/, including an Android library; a minimal client sketch follows this list. And, at a higher level, the SMART Health IT platform defines standards for profiles and authorization for healthcare apps, plus an app gallery.
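
Here's roughly what a minimal HAPI FHIR client looks like in Java, reading a single Patient resource using the DSTU2 model classes. The server URL points at HAPI's public test server, and the resource ID is a placeholder:

    import ca.uhn.fhir.context.FhirContext;
    import ca.uhn.fhir.model.dstu2.resource.Patient;
    import ca.uhn.fhir.rest.client.IGenericClient;

    public class FhirReadExample {

        public static void main(String[] args) {
            // FhirContext is expensive to create; apps should make one and reuse it
            FhirContext ctx = FhirContext.forDstu2();

            // Point the generic client at any FHIR server
            IGenericClient client =
                    ctx.newRestfulGenericClient("http://fhirtest.uhn.ca/baseDstu2");

            // Fetch one Patient resource by its logical ID
            Patient patient = client.read()
                    .resource(Patient.class)
                    .withId("example")
                    .execute();

            System.out.println(patient.getNameFirstRep().getNameAsSingleString());
        }
    }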

Looking to the future, James mentioned a number of ongoing initiatives:

  • the idea of seeing your health records curated like a Facebook news feed
  • using smartphones to perform research at scale (Ed.: I immediately thought of Apple's ResearchKit). Sync For Science is a US collaboration that allows patients to share their electronic health data with research apps
  • an artificial pancreas that uses a smartphone as a controller, e.g. TypeZero
Eric Fung