[colug-432] FWD: TOMORROW: Guest Speaker [improving web usability for the blind and everyone else]

Jeff Frontz jeff.frontz at gmail.com
Mon Nov 30 14:23:31 EST 2009


[This talk is on Tuesday, 01 December at OSU; the topic expands beyond
enabling the visually challenged to include remote workers and
"crowdsourcing"]


---------- Forwarded message ----------
From: Tamera Cramer <tcramer at cse.ohio-state.edu>
Date: Mon, Nov 30, 2009 at 2:04 PM
Subject: TOMORROW: Guest Speaker
To: CSE Faculty <faculty at cse.ohio-state.edu>, CSE Grads
<grads at cse.ohio-state.edu>
Cc: "CRAMER, TAMERA" <tcramer at cse.ohio-state.edu>


Guest Speaker



Teaching the Web to Speak and Be Understood



Jeff Bigham

Department of Computer Science

University of Rochester



Tues., Dec. 1st, 2009

3:30PM

480 Dreese Labs

All interested parties are welcome to attend.

Refreshments will be available at the presentation.



Abstract:



In this talk I'll describe my efforts to teach the web to speak and be
understood in order to improve web access for blind people.



The web is an unparalleled information resource, but it remains
difficult and frustrating to use for millions of blind and low-vision
people. My work attempts to achieve effective personalized access for
blind web users with applications that benefit all users, even sighted
ones.



I'll discuss the following projects to demonstrate this approach: (i)
Usable CAPTCHAs dramatically improve the success rate of blind users
on CAPTCHA problems and illustrate the potential of improving an
individual interaction, (ii) TrailBlazer helps users efficiently
connect interactions together by predicting what they might want to do
next, and (iii) WebAnywhere adds speech output to any web page without
installing new software, even on locked-down public terminals. These
projects have made significant advances in web accessibility and
usability for blind web users, and they have yielded general lessons
for adapting, personalizing, and delivering better content to all
users.
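
[Editor's note: as a rough illustration of the WebAnywhere idea
(speech output on any page, with nothing for the user to install),
here is a minimal TypeScript sketch that voices a page in the browser.
It is an analogue built on the Web Speech API, not WebAnywhere's
actual implementation; that system predates this API and streamed
server-generated speech to the browser instead.]

    // Minimal sketch (an illustrative assumption, not WebAnywhere's
    // code): speak the text of whatever element receives keyboard
    // focus, using the browser's built-in Web Speech API so nothing
    // has to be installed on the machine.
    function speakElement(el: HTMLElement): void {
      const text = el.innerText.trim();
      if (!text) return;
      window.speechSynthesis.cancel();   // interrupt any prior speech
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    }

    // Read each element aloud as the user tabs through the page.
    document.addEventListener("focusin", (event) => {
      speakElement(event.target as HTMLElement);
    });

Dropped into a page, the script gives spoken feedback as the user tabs
through links and form fields, which captures the spirit of adding
accessibility without modifying the user's terminal.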



Moving forward, I'm exploring projects that take crowdsourcing
accessibility beyond the web and into the real world. Mobile phones
with cameras, GPS, microphones, and other sensors are ubiquitous. How
can we provide tools that let blind people use their phones to make
better sense of their visual environments in the real world? I'll
describe early successes in this space achieved by using these sensors
to connect people with remote workers and outline a number of
usability challenges that need to be addressed to fully realize this
potential.



Bio:

Jeffrey P. Bigham is an Assistant Professor in the Department of
Computer Science at the University of Rochester and currently a
Visiting Scientist at MIT CSAIL. Jeffrey received his B.S.E. degree in
Computer Science in 2003 from Princeton University, and his M.Sc. and
Ph.D. degrees both in Computer Science and Engineering from the
University of Washington in 2005 and 2009, respectively. His work
centers on web adaptation and automation, with a specific focus on how
to enable blind people and others to collaboratively improve their own
web experiences. For his work, he has won numerous awards, including
two ASSETS Best Student Paper Awards, the Microsoft Imagine Cup
Accessible Technology Award, the Andrew W. Mellon Foundation Award for
Technology Collaboration, and Technology Review's Top 35 Innovators
Under 35 Award.



Host: Christopher Stewart


