Google I/O 2012 – Making Android Apps Accessible

T. V. RAMAN: Hello, everyone. Thank you for coming to this
session on Android accessibility. I have three of my colleagues
with me. I am T. V. Raman. I lead our work on
Android access. With me, I have Alan Viverette,
Charles Chen, and Peter Lundblad. And what we are talking to you
about is accessibility on the Android platform in terms of
what Android does for you with respect to increasing the
breadth of users that you can cover with your applications. So you heard talks all day today
in terms of how Android does a lot of heavy lifting
for you so that you, as an application developer, can
focus on the logic and functionality of your
applications. So today you don’t need to worry
about, am I writing for a small phone, big
phone, tablet? Android does all of
that work for you. What this talk is about is
giving you yet another perspective on the huge variety
of devices and users that you address when you build
an Android application. I use speech for all my work. I cannot see. My colleague, Peter, uses
Braille for all his work. And what we are showing you is
APIs in the platform that actually allow us to build
accessibility services that let us use all of the
applications you guys are building in ways that you would never have thought we would use them. So that's really our motivation
here with Android accessibility, is to increase
the reach of these applications to users with
different needs, with special needs. We’ve come a long way since
our access work on Android started in 2008. A quick history to offer you in
terms of how we’ve evolved and what the platform does. So our access work actually
started in Android 1.6. 1.5, we released the
text to speech API. And 1.6, basically, allowed
users to navigate using system focus. And anything that you navigated
to would speak. And you could use a lot of
simple applications that way. We have since come a long way. And in Ice Cream Sandwich, at
the end of last year, we released a feature we called
touch exploration, which then allows me, as a blind user, to
touch different parts of the screen and hear what
is under my finger. This basically then completely
opens up all aspects of the Android user interface
to a blind user. Because now, with touch
exploration, you could actually read everything on the
screen, not just things that were focusable. And that was a huge
step for us. But what we had as a gap in ICS,
which we are now covering today, and we’ll talk about
this in greater detail, is the next step. So that was the short
history lesson. Let’s talk about Jelly Bean. So there are significant access
API enhancements in Jelly Bean. Some of you have heard a quick
overview of this during Romain Guy’s talk this morning. We’ve done a series of things
that actually enable both spoken feedback access for
blind users as well as Braille, which Peter will
show you in a bit. But at a high level, here are
the three things we are introducing today. We have introduced the notion
of accessibility focus in Android, which Peter will talk
about in detail, that allows blind users to reliably step
through the interface. So accessibility focus is
something that can be put on any part of the user interface
and, as a result, what is there will speak. We have also introduced this
notion we call accessibility actions that you can then
trigger from accessibility services, like TalkBack
and BrailleBack, which Peter will show you. What those actions let us do is
we can then perform clicks and selections and things like
that programmatically, which then allows us to hook up
esoteric devices like Braille displays and other keyboards and
whatever you can imagine. And finally, we have introduced
a set of gestures– this wasn’t mentioned during
the keynote today– that allow us to then bring
together accessibility focus and access actions to allow a
blind user to very effectively use the Android user
interface. And last but not least the other
major enhancement in Jelly Bean is the coming
of age of Chrome. And the Chrome browser
is completely accessible in Jelly Bean. And it all works seamlessly,
as you shall see during the various demos we have for you. So with that I’ll hand off to
Peter, who’s the lead engineer behind BrailleBack, which
is the Braille enablement of the platform. So BrailleBack is an
accessibility service that uses these same APIs that I’ve
been talking about to basically provide a Braille
interface on Android. So Peter, go for it. PETER LUNDBLAD: Thank
you, Raman. Hi, everyone. I’m Peter Lundblad. And I’m going to, as Raman said,
talk about a few things. I’m going to start talking about
one of the things that we’ve added in Jelly Bean
that enables things like BrailleBack and other
things for the user. And this is called accessibility
focus. It allows a user to interact
with anything on the screen, like any view. It behaves similar to a cursor,
in the sense that you can move it around
on the screen. It is also similar
to input focus. But the difference is, as I
said, that it can interact with all views, which
is great, because input focus can only land on views that are meant to, for example, take input. So a good example of that is an edit text or a push button that you can press Enter on, and then it will activate
the button. But for a blind user, you want
to be able to read anything on the screen, including static
text and similar, which is normally not focused
by input focus. So therefore, we added this
accessibility focus that is shown on screen as a
yellow rectangle. Accessibility focus can be
placed anywhere by the accessibility service. And this allows multiple
modes of interaction. It’s handled by the system,
which means that there is one global source of truth. But again, Accessibility
Services get notified when it moves. And they also have
control over it. So what are examples of these
different modes of operation and interaction by the user? How can we move accessibility
focus? One example, as Alan
is showing, is by swiping on the screen. Alan will actually show you
later on more about these swipe gestures that
we’ve added. Another example is that the
touch exploration that was already added in Ice Cream
Sandwich now moves accessibility focus. We are also able to move
accessibility focus by accessibility services using,
for example, a Braille display, which I’m going
to show you in a bit. The great thing is that, when
one accessibility service moves the accessibility focus, the change gets broadcast to all accessibility services. So, for example, if you have
TalkBack and BrailleBack running on the system at
the same time, they get synchronized. And the user knows where he is
by both Braille and speech. Now, being able to only
read what's on the screen is great. But life is pretty boring if you cannot interact with anything. So that moves us over to
the next part, which is accessibility actions. Accessibility actions allow a user to interact with nodes in different ways. And that's very important, because, with these different ways of moving focus around, you don't necessarily know where on the screen the different views physically are. So you also need a way to actually do things with them. So what accessibility actions
do we support? The obvious thing is moving
accessibility focus, as I already mentioned a few times. We can also move input focus,
which is useful. Because then we can, at will,
as an accessibility service, synchronize the two focuses when
that’s good for the user. We can click on views, which,
of course, is the most used action so that we can activate
buttons, activate input fields, and so on. Another important thing
is that we can scroll on the screen. So if the view has more
than one page, we can move between them. That’s already possible using
swipe gestures on the screen. But a problem for blind users
is that it’s hard to scroll exactly by one page. And then, if you don’t do that
correctly, you lose your place in the list, for example. So the new actions allow us to scroll discretely. There are also actions to move inside text. So if you have a text field with lots of text, maybe you want, as a user, to read it by character, by word, or by paragraph. And there are actions for moving inside web views. All of these are called movement at different granularities. We also added something, which I'm not going to talk much about, called global actions. Those allow things like going to the home screen, pressing the back button, opening the notifications, and so on. This is purely handled by the system, and application developers don't have to worry about it at all.
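For reference, a minimal sketch of how an accessibility service might invoke these Jelly Bean actions. This is not code from the talk's slides; the node is assumed to be an AccessibilityNodeInfo obtained from an accessibility event, and the service is the running AccessibilityService.

    import android.accessibilityservice.AccessibilityService;
    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class ActionSketch {
        // Illustrative only: shows the API calls, not TalkBack's actual logic.
        static void performSampleActions(AccessibilityService service,
                                         AccessibilityNodeInfo node) {
            // Put accessibility focus on the node, then click it programmatically.
            node.performAction(AccessibilityNodeInfo.ACTION_ACCESSIBILITY_FOCUS);
            node.performAction(AccessibilityNodeInfo.ACTION_CLICK);

            // Scroll a scrollable container forward by exactly one page.
            node.performAction(AccessibilityNodeInfo.ACTION_SCROLL_FORWARD);

            // Move inside text by a chosen granularity, here word by word.
            Bundle args = new Bundle();
            args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_MOVEMENT_GRANULARITY_INT,
                    AccessibilityNodeInfo.MOVEMENT_GRANULARITY_WORD);
            node.performAction(
                    AccessibilityNodeInfo.ACTION_NEXT_AT_MOVEMENT_GRANULARITY, args);

            // Global actions go to the system rather than to a particular node.
            service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_HOME);
            service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_BACK);
            service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_NOTIFICATIONS);
        }
    }
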
So the great thing with accessibility focus and actions is that, most of the time, they are handled by the system as well. But there are occasions when the application needs to worry. I'm going to show you some code
examples before we are going into the demo about
how these can look. If you have a very simple view, like in the first code example, it just adds an OnClickListener. I think this should be familiar to anyone who has done any Android app development. If you have this kind of view, it is already totally handled by the system, because the default handling of the click action, which is what allows us to perform clicks from Braille displays, for example, simply calls the OnClickListener, and that's it.
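The slide code isn't in the transcript, but a rough stand-in for that first example might look like this; the layout, the id, and handleOk() are made-up names:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;

    public class SimpleClickActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.simple_click);          // hypothetical layout
            Button okButton = (Button) findViewById(R.id.ok_button);
            okButton.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    handleOk();                             // hypothetical app logic
                }
            });
        }

        private void handleOk() {
            // React to the click.
        }
    }

Nothing accessibility-specific is needed here: when a service sends the click action, the framework's default handling calls this same listener.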
Now, let's look at the second code example. There we have a different kind of view. This view is a bit more low-level. It's a custom view where we handle touch events directly. So we have an onTouchEvent handler, and that will be called whenever there is a touch event. The view responds to the touch event directly and calls some internal function of itself, so it's not going through the OnClickListener. This, obviously, won't work, because the system won't know what function to call when an accessibility service invokes the click action. So we have to fix that issue, because otherwise the user won't be able to click on this view.
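A sketch of the kind of view being described, again with made-up names (TouchOnlyView, doInternalClick); the point is that the click logic never goes through an OnClickListener:

    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    public class TouchOnlyView extends View {
        public TouchOnlyView(Context context) {
            super(context);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_UP) {
                doInternalClick();   // handled internally, invisible to the system
                return true;
            }
            return super.onTouchEvent(event);
        }

        void doInternalClick() {
            // Respond to the "click" directly.
        }
    }
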
One way is to refactor the view, of course, by invoking the OnClickListener and then letting the default action handling take care of the situation. But if that's not possible, because sometimes you don't have access to the view or maybe you can't easily change the code, then it's possible to use something that we call an accessibility delegate. That allows you to handle the accessibility actions, and the other accessibility-related calls, externally from the view. And in this last code example, we are doing that to take care of the click action ourselves and call our internal function. And that will fix the problem we had on the previous slides.
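A sketch of that delegate approach for the hypothetical TouchOnlyView above; performAccessibilityAction on View.AccessibilityDelegate is the Jelly Bean API being described, but the surrounding names are illustrative:

    import android.os.Bundle;
    import android.view.View;
    import android.view.accessibility.AccessibilityNodeInfo;

    public class DelegateSketch {
        static void install(final TouchOnlyView view) {
            view.setAccessibilityDelegate(new View.AccessibilityDelegate() {
                @Override
                public boolean performAccessibilityAction(View host, int action,
                                                          Bundle args) {
                    if (action == AccessibilityNodeInfo.ACTION_CLICK) {
                        view.doInternalClick();   // route the click to the internal function
                        return true;
                    }
                    // Anything we don't handle goes back to the default handling.
                    return super.performAccessibilityAction(host, action, args);
                }
            });
        }
    }
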
Now, depending on what kind of view you are creating, you might have to add handling for other actions as well. For example, if there is
scrolling support in your view, you might need to handle
the scroll action. It’s also important to remember
to call back to the super class if you don’t handle
the action yourself. So what can all these additions
be used for? We are adding something new. The new features of the
platforms enable us to add Braille support. Sometimes blind users prefer
to use something called a refreshable Braille display. And this is an alternative
way of interacting with the system. We are enabling a number of
different Bluetooth-enabled Braille displays to connect
to Android phones. A Braille display has a line of
so-called Braille cells on them with dots that
can be raised. And the user can read
on that line. There are navigation keys on a
Braille display so that, since it’s only one line, it’s
possible to move it around on the screen and move around in
the user interface, activate things, and so on. Another thing that is usually
on a Braille display is a chorded keyboard so that you
can also input Braille. And this makes it much easier
to type text than if you’re using a touchscreen. Of course, sometimes using the
on-screen keyboard is the only alternative. But if you have the Braille
keyboard it makes life much easier. So we are adding this
accessibility service, which is connected via Bluetooth
to the Braille displays. It uses the accessibility events
and nodes in the node tree to know what is
on the screen and present that to the user. It synchronizes with TalkBack
using accessibility focus. So you can actually interact
with the system in many ways at the same time. You can use TalkBack, or
you can use Braille. But they will both
be synchronized. In addition, we are adding an
input method so that you can actually enter text using
the Braille display. This new accessibility service
is called BrailleBack. It’s available on Google Play
Store today if you are running Jelly Bean. So please try it out if you
happen to have access to one of these hardware Braille
displays. I am now going to do
a little demo. And as we all know,
sometimes wireless technology can be difficult. So let’s see how this works. The first thing that can happen
is that a Braille display is disconnected from
the Android device. So we wanted to make it as simple as possible to reconnect to the Braille device, or, actually, to reconnect after you have disconnected it to save battery. So what the user then does is lock the screen and unlock it again. And I'm going to ask Alan to do
this for me today so that we get the display connected. COMPUTER SPEAKER: Screen off,
12:30 AM ringer seven, slide home, home screen three. PETER LUNDBLAD: And there,
we heard a little sound. And that made the Braille
display actually connect. Unfortunately, though,
it is not– as I mentioned, the
wireless is not– COMPUTER SPEAKER: Screen on. PETER LUNDBLAD: –always
our friend. So I’m going to try
this once more. COMPUTER SPEAKER: 12:31
AM, ringer 70%. PETER LUNDBLAD: We might have
too many [INAUDIBLE]. ALAN VIVERETTE: To minimize
interference, if you have Bluetooth turned on, if you
could turn it off, we would appreciate that. PETER LUNDBLAD: So this
is now working. COMPUTER SPEAKER: I/O 2012
web view tutorial. PETER LUNDBLAD: OK, so let’s
look at what I can do here. So as you can hear, the
speech is talking. I can now use the keys on the
Braille display to move around the screen. So let me do that. COMPUTER SPEAKER: Home
screen three. PETER LUNDBLAD: It says
home screen three. And as you can see, the
accessibility focus is moving to focus the whole screen. And the speech is
also talking. COMPUTER SPEAKER: I/O 2012 web
view tutorial, 031, Google I slash O 2012. PETER LUNDBLAD: Let me invoke
one of the actions I talked briefly about before. And that’s the action to open
the notification window. I do that by pressing
a key combination on the Braille display. COMPUTER SPEAKER: 12:31,
Thursday, June 28, 20. Screen will rotate
automatically. Check box orientation
set to portrait. Clear notification button. PETER LUNDBLAD: Here
I go to the– COMPUTER SPEAKER: Alan
Viverette, June 27, 12. Hey, want to get a coffee? PETER LUNDBLAD: OK, so
Alan is asking if I want to get a coffee. That’s great. Let me respond to that chat. What I also have on the Braille
display is a row of small buttons that easily lets
me click on anything that I have focused on the display. So I’m going to click on
Alan’s chat message. COMPUTER SPEAKER: Edit box. Type message. PETER LUNDBLAD: That takes me
right into the edit box in Google Talk as we expect when I
click on this notification. I’m going to move upwards on
the screen to see what he actually said. COMPUTER SPEAKER: This chat
is off the record. Hey, want to get a coffee? PETER LUNDBLAD: As
you can see, he’s concerned about privacy. But he still wants to
have some coffee. COMPUTER SPEAKER: This chat
edit box, type message. PETER LUNDBLAD: I’m going
to type a response. COMPUTER SPEAKER: Y-E-S,
yes, I, I, L-O-V-3. PETER LUNDBLAD: OK, making a
typo, I can easily fix that. COMPUTER SPEAKER:
Three deleted. E, love, [RAPIDLY SPELLING LETTERS] PETER LUNDBLAD: All right,
so I’ve typed a response. I can use the small keys I
earlier mentioned to move around on the screen
sometimes. COMPUTER SPEAKER: This edit
box, yes, I love coffee. PETER LUNDBLAD: OK,
so I have now– I’m going to press the button
to actually send– COMPUTER SPEAKER: This chat– PETER LUNDBLAD: –this
message. COMPUTER SPEAKER: –is
off the record. Edit box. Yes, I love coffee. PETER LUNDBLAD: And there
you see that I can actually send a message. And it now appears in the
chat list before. I’m going to invoke another
global action that we have that’s very convenient. And that is a key combination,
again, on the Braille display. COMPUTER SPEAKER: Home,
home screen three. PETER LUNDBLAD: That takes us
back to the home screen. And with that, I’m going to
hand it over to Alan who’s going to talk about
touch exploration. ALAN VIVERETTE: Thank
you, Peter. I look forward to getting
that coffee later. So as we showed, you
can use your finger to explore the screen. So you can set accessibility
focus by touching your finger to the screen, as
you just heard. This provides random access to
on-screen content, which is really great if you are familiar
with what the screen looks like. So somewhere like the
All Apps screen, you have a lot of buttons. And you can find things
pretty quickly. You can now double-tap to
activate the item that has focus with absolute certainty
that what you just heard is what’s going to be launched. Now, this is great. But having some deterministic
way to access items on screen is even better. So let’s say you have
a really big screen with one little button. I can move my finger around for
a long time and never find that button. But if I can touch my finger to
the screen and just swipe to the right to go to the next
item, I can find that button very quickly. And in fact, I can just keep
swiping right to go through every single item on screen. So we’ve added these swipe
gestures that I demoed earlier when Peter was talking. And we’ve also added gestures
for global actions. So Peter showed you home
on the Braille display. You can also draw a shape on
the screen to go home. We also support back, recent
applications and notifications. An accessibility service like
TalkBack or BrailleBack can also use gestures to manage
internal state. So in TalkBack, we have a
gesture that you can use to start reading the screen
by word or character instead of by object. So here’s a quick overview of
the gesture mapping that we have in TalkBack. You’ll see there are
a lot of gestures. And in fact, these aren’t
all of the gestures. We’ve left a little
bit of room for experimentation later on. So let’s do a quick demo
for explore by touch. All right, so first I’m going
to start with touch exploration. COMPUTER SPEAKER: Home
screen three. Apps. Home. Showing item three of five. ALAN VIVERETTE: So I can look
through my apps, random access by moving my finger. Or I can swipe left and right if
I know that what I want to find is probably a little
bit past Maps. COMPUTER SPEAKER: Messenger. Navigation. People. ALAN VIVERETTE: OK, so if I want
to launch People, I can just double-tap anywhere
on the screen. COMPUTER SPEAKER: Full contacts
drop-down list. ALAN VIVERETTE: And
I get contacts. So if I’d like to go back, I
can draw a back gesture. COMPUTER SPEAKER: Clear apps. ALAN VIVERETTE: Let’s
go back again. COMPUTER SPEAKER: Clear
home screen three. ALAN VIVERETTE: And let’s take
a quick look at the Google I/O. COMPUTER SPEAKER: I/O zero,
Google I slash O 2012. Google I slash O 2012. List showing one items. ALAN VIVERETTE: Here I
have a list of items. COMPUTER SPEAKER: Showing items
one to three of 21. ALAN VIVERETTE: And
I can tap– COMPUTER SPEAKER: 8:00 PM browse
sessions empty slot. ALAN VIVERETTE: –on an
item within that list. And if I want to move an entire
page at a time, there’s a gesture for– COMPUTER SPEAKER: 10:00 AM. ALAN VIVERETTE: –moving
up an entire page. COMPUTER SPEAKER: Wednesday,
June 27. ALAN VIVERETTE: So these are the
same accessibility actions that we use in BrailleBack. And they’re something that
you, as a developer, generally, won’t have
to worry about it. T. V. RAMAN: So notice that– ALAN VIVERETTE: Sorry. T. V. RAMAN: Notice what Alan
is showing there is a very, very powerful interaction model
for completing tasks very quickly. So you use the Play Store
all the time. So you know that the
Install button is approximately somewhere. So touching the screen and doing
one flick is pretty much all it takes. Whereas in ICS, you
would explore. And then before ICS, you would
have used the trackball. So it makes the user interaction
model really, really effective. And also, notice that with what
we have done, the access guidelines also change. In the past, before ICS, only things that could take system focus were visible to TalkBack, and we used to say, make
things focusable. Now your life as a developer
is a lot easier. ALAN VIVERETTE: So as I
mentioned, as a developer, you generally won’t have to worry
about this, except when it doesn’t work. So you might wonder what
receives focus when I’m moving my finger around the screen. Now, obviously, as a developer
who’s probably made layouts before, you may know
that you’ll have a lot of nested layouts. Obviously, these aren’t
all being read. So we’re picking actionable
groups. So actionable means clickable
or focusable. And if you have a group, like
this folder icon that we’re showing in the image, this is
actually a group that contains an image and a piece of text. And because the group
itself is clickable, it gets read aloud. Now, if it has a content
description, then, instead of its children being read
aloud, the content description is read. And if you have an item that
isn’t actionable, and it doesn’t have any actionable
ancestors, obviously, the only way it can be read is
if somehow you can put accessibility focus on it. And fortunately, it will receive
accessibility focus. So here’s a Hierarchy Viewer
view of what that folder icon looks like. So you can see that there’s
a folder icon, which is a view group. And that contains an image view
and a bubble text view in XML that looks like this. So you can see the folder icon
is clickable, thus making an actionable group. And its children have
two pieces of text. So the image view says folder. The bubble text view
says Google. And when TalkBack puts focus on
this actionable group, it will say, folder Google. So some tips when you’re
designing your application, make sure you use built-in
widgets whenever possible. These things will just work. Because people have put a lot
of thought into them. Make sure your app works with
a keyboard or D-Pad. So what we always used to say
was make sure your app works with a D-Pad. And if your app did work with
a D-Pad, fortunately, it will just work. You may have to make some
changes if you were doing very special things. But for an application using
built-in widgets, it will most likely just work. Make sure that you have
readable content on actionable items. So if you have an image button,
give it some text in a content description. And if you have a view group
that’s focusable or clickable, put some text in it or put some
content descriptions on some of its child items. So here’s an example of bad
design and a way to fix that. So this is an orphaned
actionable item. You have a frame layout
that contains a view. This view is clickable. And it fills the entire
frame layout. This text view, which has text,
also fills the entire frame layout. And for a sighted user, this
looks like a big button with a text label that you
can click on. And that’s how it performs. But to a service like TalkBack
or BrailleBack, this looks like two separate items. So if you make the frame layout
clickable, and you put text inside of it, you have an
actionable group and a child that will be read out loud.
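The slide's XML isn't reproduced in the transcript, but the same fix, sketched in code with made-up names, looks roughly like this: the clickable container carries the click handler, and the label lives inside it, so TalkBack treats them as one actionable group.

    import android.content.Context;
    import android.view.Gravity;
    import android.view.View;
    import android.widget.FrameLayout;
    import android.widget.TextView;

    public class LabeledButtonSketch {
        static FrameLayout build(Context context) {
            FrameLayout group = new FrameLayout(context);
            group.setClickable(true);
            group.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    // hypothetical app logic
                }
            });

            TextView label = new TextView(context);
            label.setText("Buy now");                       // hypothetical label
            group.addView(label, new FrameLayout.LayoutParams(
                    FrameLayout.LayoutParams.WRAP_CONTENT,
                    FrameLayout.LayoutParams.WRAP_CONTENT,
                    Gravity.CENTER));
            return group;
        }
    }
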
All right. So sometimes it gets a little bit more complicated than that. So let's say you've made this
really awesome keyboard. And so here’s what it looks
like on screen. You’ve got a bunch of cool
little buttons that you’re rendering in code. So instead of having actual
views for each one of these buttons, you’re just drawing
them onto the screen. Here’s what the XML for
that would look like. And as you might be able to
guess, there’s not much of a hierarchy there, not many
actionable groups nor many readable children. And you can fix that by
providing more information to services like BrailleBack
and TalkBack. So this is what it looks like
without any changes to a blind user and to an accessibility
service. It’s just a big blank area. So fortunately, we have three
steps that you can take. One is, in your custom view,
handle incoming hover events. When accessibility is turned
on and explore by touch is turned on, when you touch the
screen, your view receives hover events. If you take a look at the
Android source code for view, you’ll notice that hover events
get handled a little bit specially if accessibility
is turned on. So here, because I know where
I’m rendering the keys on screen, I can map the
xy-coordinates of a motion event to a key. I can say, was I just
touching this key? If not, then I know that I need
to send the appropriate accessibility events. So as I touch the key, I’ll
get a hover exit from the previous key, a hover enter for
the new key, and TalkBack or BrailleBack will say
the appropriate speech for the key.
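A sketch of step one, written as a fragment of a hypothetical KeyboardView custom view (imports and the Key class are omitted); findKeyAt(), mLastHoveredKey, and sendAccessibilityEventForKey() are assumed names, while onHoverEvent and the hover event types are real framework APIs:

    // Inside the hypothetical KeyboardView class.
    private Key mLastHoveredKey;

    @Override
    public boolean onHoverEvent(MotionEvent event) {
        // Map the finger position to whichever key is drawn there.
        Key key = findKeyAt((int) event.getX(), (int) event.getY());
        if (key != mLastHoveredKey) {
            if (mLastHoveredKey != null) {
                sendAccessibilityEventForKey(mLastHoveredKey,
                        AccessibilityEvent.TYPE_VIEW_HOVER_EXIT);
            }
            if (key != null) {
                sendAccessibilityEventForKey(key,
                        AccessibilityEvent.TYPE_VIEW_HOVER_ENTER);
            }
            mLastHoveredKey = key;
        }
        return true;
    }
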
Step two: you need to populate that outgoing event. You just sent a hover enter event; you need to tell it what key you were just touching. So here, I've made a send-hover-enter-event-for-key method that takes a key and an event type and populates the event with the information for the key. That would obviously include things like text. And here I'm also setting the source, because to get this great Jelly Bean functionality of being able to swipe and being able to double-tap to activate things, I need a node hierarchy.
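Continuing the same hypothetical KeyboardView fragment, a sketch of the method that fills in the outgoing event; key.label and key.virtualId are assumed fields, and setSource(View, int) is the Jelly Bean call that ties the event to a virtual child:

    // Inside the hypothetical KeyboardView class.
    private void sendAccessibilityEventForKey(Key key, int eventType) {
        AccessibilityManager manager = (AccessibilityManager)
                getContext().getSystemService(Context.ACCESSIBILITY_SERVICE);
        if (!manager.isEnabled()) {
            return;
        }
        AccessibilityEvent event = AccessibilityEvent.obtain(eventType);
        event.getText().add(key.label);                 // what will be spoken or brailled
        event.setClassName(Key.class.getName());
        event.setPackageName(getContext().getPackageName());
        event.setSource(this, key.virtualId);           // identify the virtual key
        getParent().requestSendAccessibilityEvent(this, event);
    }
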
So after I send my accessibility event, it gets to TalkBack or BrailleBack, and it carries a virtual key ID. The service can then query my application
for the node info that’s associated with
that key ID. So here I’m using a node
provider, which I’ve taken out due to space constraints. But it has this
createAccessibilityNodeInfo method that takes a key ID. I map that key ID to an actual key. And then I populate the node info with the key's properties. For consistency's sake, I'm also
setting the parent of the node to be the keyboard that
it belongs to. And I’m setting its source to be
its own virtual key ID and, of course, its parent. So after taking these three
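And a sketch of step three for the same hypothetical KeyboardView fragment: a node provider that turns a virtual key ID back into a populated node. mKeys, findKeyById(), and the Key fields are assumptions; obtain(), addChild(), setBoundsInParent(), setParent(), and setSource() are the real Jelly Bean APIs being described.

    // Inside the hypothetical KeyboardView class.
    @Override
    public AccessibilityNodeProvider getAccessibilityNodeProvider() {
        return new AccessibilityNodeProvider() {
            @Override
            public AccessibilityNodeInfo createAccessibilityNodeInfo(int virtualViewId) {
                if (virtualViewId == View.NO_ID) {
                    // The keyboard itself: list its virtual children.
                    AccessibilityNodeInfo root =
                            AccessibilityNodeInfo.obtain(KeyboardView.this);
                    onInitializeAccessibilityNodeInfo(root);
                    for (Key key : mKeys) {
                        root.addChild(KeyboardView.this, key.virtualId);
                    }
                    return root;
                }
                Key key = findKeyById(virtualViewId);
                AccessibilityNodeInfo info =
                        AccessibilityNodeInfo.obtain(KeyboardView.this, virtualViewId);
                info.setText(key.label);
                info.setClickable(true);
                info.setBoundsInParent(key.bounds);      // where the key is drawn
                info.setParent(KeyboardView.this);       // consistency with the real tree
                info.setSource(KeyboardView.this, virtualViewId);
                return info;
            }
        };
    }
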
steps, my keyboard looks the same to every user. So if somebody’s using TalkBack
or BrailleBack, if they put their finger on
it, they’ll receive the appropriate spoken or
Braille feedback. And we don’t just handle
native Android views. We also handle web views
really well. And Charles will tell
you more about that. CHARLES CHEN: Thank you, Alan. Hey, so I’m Charles Chen. I’m here to talk about web
accessibility on Android. So Alan just gave you some
really great advice on how to make native Android apps
accessible, what you need to do, and how to do it. You can do the same thing if
you’re building a hybrid app. So if you’re building a hybrid
app that’s a mixture of web content and native Java
controls, you can make that accessible and make it work
really well for users with visual impairments. So if you’re using a web view
just to do something really simple or really basic– for example, maybe display a
terms of service displaying instructions to the user– then that case is pretty
straightforward. You have a web view. You put the text in there. And we’ll just process it
like regular plain text. Everything works, no
problem, simple. On the other hand, if you are
going to build something that’s a little bit more
dynamic, a little bit more Ajaxy, if you’re going to use
JavaScript HTML5 as part of your UI, you can still
make that accessible. You can still do a
great job here. All you really have to do is
to follow the same best practices that you would do for
a web app on the desktop. And the reason you do that is
because on Android, we’re running AndroVox. So AndroVox is a part
of ChromeVox. ChromeVox is our screen reading solution for Chrome OS. It runs on Chrome OS and Chrome. By the way, for those of you who
are interested in hearing more about ChromeVox, please
come to the talk on Friday, Advancing Web Accessibility. It’s going to be a
really good talk. And I’m saying that not just
because I’m one of the presenters. Rachel there is also going to be
presenting, as well as one of our other colleagues,
Dominick. So please attend that
talk Friday, 11:30. Hope to see you guys
there, please. Anyways, getting back to
Android, so AndroVox is a part of ChromeVox. And this gives you a
lot of benefits. So all of the hard work that
we've put into making ChromeVox work really well
on Ajax content, making it support W3C standards, such
as ARIA and HTML5. All of that goodness
comes into Android. And it just works. And we’ve integrated this with
Android so that the two experiences, both web content
and native Android controls, blend seamlessly. And the users can just
use your app. And they won’t even really
know the difference. And it will all just work for
them in a single simple experience. And so with that, I’m going
to switch over to demo. And so it’s usually helpful to
get onto the demo app that I intend to show with
web content. COMPUTER SPEAKER: Home. CHARLES CHEN: So
let me do that. COMPUTER SPEAKER: Home,
home screen three. I/O 2012 web view tutorial. I/O 2012 web view tutorial. Web content. Google I slash O 2012
web view tutorial. CHARLES CHEN: OK, so as you can
see here, I have a hybrid application. This has web content
near the top. And it also has Android controls
near the bottom. And I’m going to touch
the web content. And you’ll see that touch
exploration works the same way in a web view as it does
for any native control. So I’m going to start
touch exploring. COMPUTER SPEAKER: Accessibility
for Android’s web views is handled by
AndroVox, a port of ChromeVox for Android. CHARLES CHEN: OK, now, Alan
earlier was showing you gestures where you could
do swipes to do linear navigation. The same exact thing works
here in web views. So I’m just going to do that. COMPUTER SPEAKER: The same best
practices for building accessible websites apply for
making web views accessible. CHARLES CHEN: OK, so now, I’ve
actually reached the end of this web content. And I want to move forward. I shouldn't really have
to care about that as an end user. So I’m just going to
keep navigating. COMPUTER SPEAKER: Previous
button disabled. CHARLES CHEN: OK, so I’ve
actually jumped out of the web content now. And I’m in the native
Android control. Now I’m going to go to the
next button and click it. COMPUTER SPEAKER: Next button. CHARLES CHEN: OK, and now
I’m going to click. COMPUTER SPEAKER:
Google I slash O 2012 web view tutorial. CHARLES CHEN: OK, so as you
can guess from the heading here on this in the web content,
this is probably not going to be a good slide. This is going to be something
that’s really bad. So here’s an example of what you
should never, ever do when you’re making a hybrid
application. This is an application that
doesn’t follow best practices, doesn’t do the right things. So let’s kind of go through it
and see what’s wrong with it. COMPUTER SPEAKER:
Web content bad. Heading one. Here is an example of a
poorly authored page. This button is made up of
DIV and SPAN elements. It has no roles set. CHARLES CHEN: OK, so I’m about
to go to a button. It doesn’t have the
right settings on. So even though visually it looks
like it’s just a green button there and it looks really
pretty with 3D CSS, it’s really just DIV
and SPANs in there. It’s not labeled as ARIA. It doesn’t have any roles
labeled for it. So the user isn’t going to
know that it’s a button. There are no semantics
backing it. It’s just simple
DIVs and SPANs. So let’s see what happens
when I go there. COMPUTER SPEAKER:
OK, clickable. CHARLES CHEN: OK, so you know
it’s clickable because we could detect that there was
a click handler there. But aside from that, you didn’t
really know if that’s a button, a chat box. What is this thing, right? You don’t get any additional
feedback. And that’s because it
wasn’t properly set with a role attribute. Let’s try clicking
on this, though. Because, hey, it
says clickable. So what could happen, right? Let’s see. COMPUTER SPEAKER: Clicked. CHARLES CHEN: Huh? OK, I just clicked something. And I know I clicked it. But I totally missed that
alert that came up. So let’s see why
that happened. COMPUTER SPEAKER: When this
button is clicked, the alert region that is shown does not
have an alert role set, nor is it marked as a live region. T. V. RAMAN: So there are these
simple HTML attributes you can add. And you can look these up; there is a W3C spec around it. But as Java developers and Android developers, what you need to know is that you need to annotate your DIVs and SPANs with attributes that give semantics. And when dynamic changes happen, you need to annotate those regions as being dynamically changeable. At which point, whatever adaptive technology the blind user is using then knows to speak. In this case, the technology, as Charles explained, is ChromeVox. But this is not Android
specific. This is basically good
accessibility practice on the web. CHARLES CHEN: Thanks, Raman. So that was some great advice. And again, if you want to
hear more, please come to our talk on Friday. OK, let’s move on
to an example of where this is fixed. COMPUTER SPEAKER: Next button. Google I slash O 2012
web view tutorial. CHARLES CHEN: OK. COMPUTER SPEAKER: Web content. Good heading one. CHARLES CHEN: OK, that
heading actually sounds a lot more promising. Because it said good. So this should hopefully work. So again, I’m just doing swipe
gestures to navigate. And these are the same
swipe gestures as anywhere on Android. COMPUTER SPEAKER: Here is the
same page but with the problems fixed. The button now has its
role set to button. CHARLES CHEN: OK, so now
we’ve set the correct role for this button. Let’s listen to what
it sounds like now. COMPUTER SPEAKER: OK button. CHARLES CHEN: OK, so now this
is working as intended. Now the user knows that
this is an OK button. And they hear it. And it says, OK button
so great. Let’s try clicking it. COMPUTER SPEAKER: The
OK button has been pressed, alert. CHARLES CHEN: Cool, so
now I know that an alert has popped up. And it tells me the contents
of that alert. Great. So it’s working correctly. And that’s because– COMPUTER SPEAKER: The alert
region now has its role set to alert, which is treated as
a live region by default. CHARLES CHEN: OK. And so with that, I’m
going to switch over to testing on Android. OK, so testing for accessibility
on Android is something that’s really easy. It’s as easy as one,
two, three. And there’s really no excuse
for not doing it. Because it’s built
into the system. So Alan here is going to help
demonstrate just how easy it is to get accessibility
up and running. And we know you all
have devices now. So you really should
get this done. So Alan? ALAN VIVERETTE: Feel
free to try this at home or in the audience. COMPUTER SPEAKER: Home. Home. Home screen apps. Settings. Settings. CHARLES CHEN: So what Alan
is doing here is he went into settings. He’s going into accessibility. And he’s going to turn
on TalkBack. COMPUTER SPEAKER: Showing
items seven to 24. Accessibility. Accessibility. Navigate up. TalkBack on. CHARLES CHEN: OK, well, so
normally you would turn on TalkBack and make sure explorer
by touch is enabled and also enable additional
scripts for web accessibility to ensure you get all of the– COMPUTER SPEAKER: [INAUDIBLE]
accessibility allowed. CHARLES CHEN: –to ensure you
get all the AndroVox goodness. But since we already have this
enabled, we’re ready to go. So let’s start testing it. Let’s start experiencing what
the user would experience. So the best way to do it is
to just use your app. COMPUTER SPEAKER: Double-tap
to activate. Home. Home screen three. CHARLES CHEN: And we are
going to go into the Google I/O app– COMPUTER SPEAKER: Google
I slash O 2012. Google I slash O 2012. CHARLES CHEN: –and just try to
use it the way a user who’s using TalkBack would
be doing it. So use touch exploration to
feel around the screen. COMPUTER SPEAKER: 1:45
PM, 2:44 PM. Browse code labs empty,
start now. CHARLES CHEN: OK. COMPUTER SPEAKER: [INAUDIBLE]
enter for cache I/O 2012 brainpower FTW, check us out
in after hours, stream. CHARLES CHEN: And take advantage
of the great new gestures that we’ve added for
accessibility options. Use linear navigation. Try to move around your app. Try to scroll lists. See if it works, if it’s going
to do the right thing. COMPUTER SPEAKER: Show Android
Chrome, code labs. CHARLES CHEN: Cool. So what we’re really checking
for here is to make sure that everything in our app can
be done eyes-free. The user should be able to use
your app whether or not they’re looking at it. All critical information needs
to be conveyed to the user. Anytime the user does some
action, they need feedback. They need to know that they
actually did the action, that it’s working, that something
is going on. Now, Android linting tools
here are your friend. So earlier today, Romain Guy
mentioned that Android linting has gotten a lot of
good new features. And I think there’s a
talk on it later as well for Android tools. And one of these new features
is the ability to do some really simple checks
for accessibility. Now, this will not
catch everything. But this is a really good
starting point. And this catches a very annoying
but simple to fix error, which is missing
content descriptions. So if you run the lint tool with this command, as you see here, lint
will actually catch cases where you have image buttons
that are missing content descriptions. For a blind user that's using
TalkBack or BrailleBack, all they’re going to get is that
there’s an image here. But they’ll have no
clue what it is. So this is a really bad
thing if that happens. And this will catch
it for you. Now, you shouldn’t be afraid
to use this tool. It’s not going to interfere if
you have images that are just decorative. Because if you have something
that’s purely decorative, that’s not meant to convey any
information, that’s not actionable, it’s not really
meant to do anything, you can always tag that with
the null notation. So if you set the content description to @null, it will tell the tool to ignore it. It's only decorative. It's just eye candy. It doesn't really do anything.
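A small sketch of both cases in code, with hypothetical view IDs; in an XML layout the equivalent is the android:contentDescription attribute, with "@null" for purely decorative images:

    import android.view.View;
    import android.widget.ImageButton;
    import android.widget.ImageView;

    public class DescriptionSketch {
        static void labelViews(View root) {
            // Actionable image: give it a meaningful description.
            ImageButton send = (ImageButton) root.findViewById(R.id.send_button);
            send.setContentDescription("Send message");

            // Purely decorative image: explicitly nothing to announce.
            ImageView flourish = (ImageView) root.findViewById(R.id.flourish);
            flourish.setContentDescription(null);
        }
    }
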
Now, if you do what we talked about here today, then you're going to build an app
that’s usable. But I would really like to
challenge everyone here to go further, to go the next mile. Because it’s not about just
making things usable so you can kind of struggle
through it. It’s really about building apps
that are great, building apps that people love to use. So really, I think we should all
strive for building apps that are just as efficient to
use eyes-free as it is looking at the screen. And with that, I am going to
hand it back to Raman. T. V. RAMAN: Thank
you, Charles. So to wrap up, accessibility
on the Android platform really, really helps you reach
an ever increasing wide range of users. The platform does a lot
of work for you. But if you follow some of these
guidelines that we are giving you and do some of
the testing that Charles suggested, I guarantee you that,
not only will your apps be usable by blind users, by
low-vision users, by users on specialized interfaces, but you
will, in general, discover that your user interfaces
become more robust. They degrade gracefully, which
means that your application just works in environments that
you originally did not expect it to. And that’s a very, very
powerful thought. Accessibility is the
law in many places. If you’re selling to the
enterprise, if you’re selling to universities, your
applications cannot be used if they don’t meet certain
accessibility requirements. But that’s actually, in my
opinion, the initial educational reason why you
should be worrying about accessibility. In general, if you build your
apps to be accessible, my own experience has been that those
applications eventually end up being more usable
for everyone. As an example, we last year in
our I/O talk on accessibility showcased TuneIn Radio. I discovered that app two
and 1/2 years ago. And I loved it. It was very, very nicely done. We found it was accessible
out of the box. And today that is one of the
most used radio tune in applications on Android. So I think I’d love to see a lot
more of those coming from each one of you. Thank you. CHARLES CHEN: Also, before we
go to Q&A, I just want to mention that we showed a lot
of things here today. And I know you all are
dying to see a real display in real life. So please drop by our sandbox. Our sandbox is just out in front
here of this hallway. It’s accessibility. You can’t miss it. So please drop by
and say hello. And also, we have
partners here. And so come by and
check it out. OK, and so with that, we’ll
go to questions. Thank you. [APPLAUSE] CHARLES CHEN: Yes? AUDIENCE: Yeah, with more
complex items like multiple radio buttons in a group or
things that need to swipe to do an action, are we going to
have to include instructions for what exactly is going on,
which radio button is selected, whenever, say,
one is an option? ALAN VIVERETTE: So if you’re
using built-in radio buttons, no. You can just let the built-in
widgets do their job. For gestures, so as you may have
noticed when I was doing regular scrolling, I was
using two fingers. So when explore by touch is
turned on, your one-finger gestures simply become
two-finger gestures. CHARLES CHEN: Yes, sir? AUDIENCE: So my question
relates to content descriptions. Let’s say you had a list
of items, say movies. And then when you entered it,
you got an image of the movie’s poster art. And then you had the title. Is it appropriate to make the
poster a null content description? Should that somehow dynamically
be sent or just say, this is a movie poster? ALAN VIVERETTE: So I think, in
general, if you’re going to add a content description,
it should add meaningful information. So if you’ve already got the
title of the movie, and the movie poster just reiterates
the title, then you should probably avoid it. T. V. RAMAN: Yes, and I
definitely wouldn’t like the thing to say “movie poster.”
Because that doesn’t really give me that much more
functionality as an end user. So err on the side of making
your application less chatty. AUDIENCE: Thank you. CHARLES CHEN: Yes? I think you had it first. AUDIENCE: And what [INAUDIBLE] that he showed today are not
applicable for 4.0, right? It is available for only
Jelly Bean, right? T. V. RAMAN: Yes. ALAN VIVERETTE: Correct. AUDIENCE: Right, OK. And the question that I have is
typically for the standard object’s data fixed content
description. Is it possible to modify the
content description. For example, I’m using a web
view in my application. And whenever I launch my view,
it sees [INAUDIBLE]. And because I’m using, let’s
say, [INAUDIBLE] that I want to save something else, is
it possible to do that? CHARLES CHEN: So my advice to
you is to actually not put a content description
on web view. Because, if you do, your content
description will trump the normal behavior. So you’ll actually lose all of
our web handling abilities. And you’ll end up having to
reimplement the whole thing yourself, which is not
what you want to do. So rather, you shouldn’t
do that. Instead, you should offer your
page in a way that really follows HTML5 accessibility
best practices. And it should just work. If you have more detailed
questions about that, I’d be happy to chat with you
one-on-one offline. AUDIENCE: OK. CHARLES CHEN: And I’ll
look at your apps. AUDIENCE: OK, thank you. Thanks a lot. CHARLES CHEN: No problem. Yes, sir? AUDIENCE: Hi. Look just with a lot of apps you
often find that there’s a sort of help guide to
how to use the app. And this is not even in the
realms of accessibility. Now, seeing how you stumbled a
bit with getting to the Send button in the– was it the instant messaging
demo that you made earlier? When you first found that
program, did you have to sort of prod around to even
know that there was a Send button there? How can we make it sort of
intuitive but accessible at the same time? T. V. RAMAN: So you ask
a very good question. So one way you can actually make
it really intuitive for blind users is to have
things appear in places you would expect. I’ll give you an example
of this from real life. Tonight, after the Google I/O
party, you will all go back to your hotel rooms. And when you open your hotel
room door, you do not hunt around for the light switch. The light switch
is right there. And I personally would like to
see touch screen interfaces, in the next couple of years,
develop that level of consistency, where things are
where you expect them to be. Today, for a blind user, touch
exploration is our way out. So we explore. But that, as you observed,
is painful. And in a world where things have
sort of settled down– today, in the mobile
space we are all innovating very rapidly. And in some sense, all the user
interface controls are in different places depending
on what the designer thought was best. But hopefully, things will
settle down, where a year from now, just as today you don’t
have to hunt for the light switch in your hotel room, you
can find the OK button or the Install button without actually hunting on the screen. AUDIENCE: OK, can I quickly add,
do you think it would be a good idea, if you opened your
door to the hotel room and a voice said, “There’s a
light switch to your right”? T. V. RAMAN: That would
be a nice thing. But on the other hand, if it’s
always on the right, why do you even actually need
to say that? Because, for instance,
supposing we built a system like that. We said, every time you open
the door it says where the light switch is. What is a deaf user
going to do? What is a deaf-blind
user going to do? We end up– I think, it’s hard
to say these are mutually exclusive solutions. But user interfaces work best
when you don’t notice them. They walk up to a door. The type of door handle
tells you where you should push or not. The door shouldn’t say,
push me or pull me. CHARLES CHEN: Also, I’d like to
add that, in your specific question, that’s actually one
of the really powerful ways where you can use touch
exploration and linear navigation in tandem. If you remember during my
demo, I did some touch exploration first on
the web content. And I started doing gesture
navigation. And this is the same
thing that works in any part of Android. You can touch explore
to something. And then you can start doing
gesture navigation from that point onwards. So the first time you use an
app, it might take you some time to know where things
are laid out. But once you figure that out,
in the future, you can get into the general vicinity,
and then just use linear navigation to get there. T. V. RAMAN: So for instance, on
the phone today when I ran Chrome, I load CNN or BBC or
whatever sites that I read news on often. I know that the top 1/3 of that
page on that screen is navigational stuff, and things
that I would really not need to read on a regular basis. So I approximately put my finger
halfway down, hear a headline, and then say, OK,
now read from here. So it’s a powerful paradigm. But a year from now, as these
things get consistent, hopefully we'll be saying
something more optimal. AUDIENCE: Thank you. CHARLES CHEN: Yes, sir? AUDIENCE: So we have the option
to use JavaScript to mimic native gestures, such
as a swipe or a rotate. Now that you have the ability
to go from– normally a one-finger would swipe or two
fingers would rotate. Now, you’re saying that I can
use two fingers to swipe with the touch feedback. Does that mean that web
developers can continue using JavaScript libraries that
mimic native gestures? Because that’s normally
been a problem. CHARLES CHEN: Yes, so basically
what’s happening here is that, as a developer,
you don’t really know the difference between– you won’t realize how
we’re doing the two-finger swipe thing. What’s happening is, on the
Android end, you code it the same way as you would
normally. And what you’ll get for the end
user is, if they use two fingers and they have touch
exploration on, you’ll see one finger. So you won’t actually
see that difference. T. V. RAMAN: So the answer to
your question is you can continue using those
libraries. And because we’ve done this
consistently at a platform level, blind users of your
application will know that they need to use an
additional finger. So that one finger that they’re
using for touch exploration basically to the
platform looks like a mouse pointer moving when
accessibility is on. AUDIENCE: Is there a way to
easily detect if accessibility is on on the device? CHARLES CHEN: Yes. AUDIENCE: And is it advisable to
change your content if that flag is on? T. V. RAMAN: So you can check
programmatically, yes.
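A minimal sketch of that programmatic check, using the AccessibilityManager APIs (isTouchExplorationEnabled needs Ice Cream Sandwich or later):

    import android.content.Context;
    import android.view.accessibility.AccessibilityManager;

    public class AccessibilityCheckSketch {
        static boolean isSpokenFeedbackLikely(Context context) {
            AccessibilityManager manager = (AccessibilityManager)
                    context.getSystemService(Context.ACCESSIBILITY_SERVICE);
            return manager.isEnabled() && manager.isTouchExplorationEnabled();
        }
    }
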
Unless you're doing something that is really heavily custom, where you think you can actually
provide better semantics, I wouldn’t actually
change the content. So, for instance, if you have
an extremely custom view. So Alan showed you the example
of a keyboard. But let’s say you develop
a fancy calendar app or something like that. And you have this custom canvas
into which you have built up your calendar model
using a couple of lists, a couple of grids, and whatever,
and you feel, as an app developer, that by saying list,
list, grid, and button the semantics of your app are
being lost, then you can basically implement your own
accessibility bits just the way we do for some of the more
complex platform widgets. But that’s the level at which
I would customize things. I would not sort of do a
separate view or a separate content layout. Because over time, the two
will go out of sync, and you’ll have problems. ALAN VIVERETTE: It’s very rare
that you’ll save time by implementing something
separate. CHARLES CHEN: Yeah, it only
makes sense if you were doing everything OpenGL
or something. And it’s just a plain list. And you wanted to just
have a symbol list. ALAN VIVERETTE: I should
mention, though, if you use a node provider, you can make
something that’s written in OpenGL using a surface renderer
or a GL canvas totally accessible. And it would be
indistinguishable from a real view hierarchy if you
implement the node provider correctly. CHARLES CHEN: Yeah. So in general, don’t
try to do that. T. V. RAMAN: So the extra code
you would be implementing is what Alan and Charles described,
which is the node providers and exposing– so you’d be exposing
semantics through those virtual hierarchy. CHARLES CHEN: Anyway, if you
have anything more specific, we’d be happy to talk to you
offline after this talk. Yes? And I think this might be the
last question, since we’re running short on time. AUDIENCE: Hi, my
name is Daniel. Thank you for this session. When Peter sent Alan the message
in the presentation saying that he loves coffee,
he managed to find the Send button. But the device did not give him
feedback that the message actually was not sent. We could see it. But we couldn’t hear it. So Peter thinks he
sent the message. But Alan is still
waiting for it. So what was the error in
the programming of the application? Or what was the source of this
not giving this important feedback message has
not been sent? PETER LUNDBLAD: That’s a great
question, actually. I think that it’s a balance
between, of course, giving too much and too little feedback. In this case, we should probably
have given a little bit more feedback. But you can always go back and
check if the message was actually sent if you really
want to know. T. V. RAMAN: I think it was– also, when you do hit
the send button successfully, it does say sent. And I suspect what happened
here was we were all, our concentration was more on doing
a talk over demos as opposed to real usage. But the messaging application
does give you feedback when you send successfully. It doesn’t give you feedback
when you fail to send. And so lack of feedback there
is feedback, in some sense. CHARLES CHEN: And also I’d just
like to plug one of our new accessibility events,
the announce event type. This is exactly the use
case where that would have come in handy. ALAN VIVERETTE: So
type announce– CHARLES CHEN: We’ll
be upgrading that. ALAN VIVERETTE: –is
when you want your app to just say something. If you send an accessibility
event with the type of announce, it will just
get read verbatim.
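A sketch of that announcement API as it exists in Jelly Bean: a view can ask for arbitrary text to be spoken (or brailled) verbatim, which sends an accessibility event of the announcement type under the hood.

    import android.view.View;

    public class AnnounceSketch {
        static void announceSent(View anyVisibleView) {
            // API 16+: read verbatim by the running accessibility service.
            anyVisibleView.announceForAccessibility("Message sent");
        }
    }
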
CHARLES CHEN: Thank you. OK, so any last questions, or no? OK. ALAN VIVERETTE: One last plug,
we’re down the hall at the corner, accessibility booth. Come see us. T. V. RAMAN: Thank you, guys. And look forward to your apps. [APPLAUSE]
