Reinventing Fire: Technology upside down and backwards
http://schepers.cc

You’re drunk FCC, go home
http://schepers.cc/youre-drunk-fcc-go-home
Thu, 17 Jul 2014 07:56:23 +0000

I just chimed in to the FCC to request that they stop the merger of Comcast and Time-Warner Cable. I don’t know if my voice will make a difference, but I do know that saying nothing will definitely not make a difference.

Here was my statement to the FCC (flawed, I’m sure, but I hope the intent and sentiment is conveyed):

Allowing the merger of Comcast and Time-Warner Cable will dramatically decrease consumer benefits and choice.

Some mergers can be good, allowing struggling companies to reduce losses. In this case, however, neither Comcast nor Time-Warner Cable needs the merger for financial stability; both companies are currently thriving in the marketplace.

Innovation and an open market for goods and services are in the best interest of the American people. This was clearly shown when the Bell System was broken up by the consent decree of January 8, 1982, leading to the emergence of advanced competitive services, including cellular phone service, and to lower prices. The FCC should take that as a model and discourage monopolistic mergers of competitors, which decrease innovation, price competition, and customer choice. Customer service is already notoriously poor at both companies, and decreasing customer choice is likely to make it harder for customers to receive adequate service.

Without competition, Internet providers have little incentive to provide either improved service or lower prices. The US is already widely regarded as having relatively expensive and slow Internet service compared to other industrial nations, and this merger threatens to make that worse.

In addition to the loss of benefits to the consumer, this merger threatens American jobs. When companies merge, service departments also merge, and workers lose their jobs. This is especially true when the merging companies are in similar industries: some studies have shown an average of 19% job loss, far above the norm of 7.9% when the industries are unrelated. Comcast currently employs 136,000 people and Time-Warner Cable 51,600; if the average job loss takes place, that could mean approximately 35,644 jobs lost, or more conservatively 14,820, in a still-struggling employment market. Many of these will be unskilled jobs, whose loss is even harder to absorb. While no US law requires that job loss be weighed in merger review, it is still a factor the FCC can take into account; laws become necessary only when systemic problems arise in the behavior of key industry players and regulators, and allowing this merger could necessitate the creation of a law that would otherwise be avoided.

Please take the necessary steps to block this merger.

If you are a US citizen, you have until August 25th, 2014 to file a comment. The FCC seems to have gone out of its way to make this difficult, so here are some step-by-step instructions:

Fill out the Free Press petition first, just in case. Then, if you want to register your opposition independently:

1. Go to the FCC’s Electronic Comment Filing System (ECFS).

2. Enter “14-57” in the Proceeding Number field; you’ll get no immediate confirmation, but this is the code for the “Applications of Comcast Corporation and Time Warner Cable Inc. for Consent to Assign or Transfer Control of Licenses and Applications”. (Note: this is not arcane at all. That’s just an illusion.)

3. Fill in all required personal information.

4. Ensure that the Type of Filing field is set to “Comment” (the default).

5. Write a text document explaining why this is such a bad idea; crib mine if you like, or find a much better rationale, but be sure to be clear in your opposition (or support, if you’re a masochist).

6. Upload your document using the Choose File button. (That’s right, you can’t just leave a comment in a text area; you have to write a separate document. The FCC seems to accept at least .txt and .doc files.) Add an optional description of the file in the Custom Description field, so they know your sentiment even if they don’t open your file (which is pretty likely); I labeled mine “Block Comcast-TWC merger”.

A Life in a Day, in 2024
http://schepers.cc/a-life-in-a-day-in-2024
Wed, 16 Jul 2014 02:03:48 +0000

I woke up startled; my glasses were ringing. I was late for a telcon… again. I’d stayed up working too late the night before.

I slipped on my glasses and answered the call. Several faces popped up a few feet in front of my eyes. Okay, so it was a videocon… sigh. I muted and blanked my glasses, switched them to speakerphone, and placed them on the table, the lenses vibrating as speakers. I pulled on some clothes and rubbed my face awake, trotting into the bathroom with my glasses in my left hand. As I splashed some water on my face, I heard my name called from my glasses on the counter; “Doug, did you get in contact with them?”

“Specs, delay,” I told my glasses, and my phone agent told the other participants, politely, “Please wait for 10 seconds for a response.”

Drying my face quickly on a towel, I put my glasses back on, looked into the mirror, unblanked the camera and unmuted the mic, and replied, “Hey, folks, yes, sorry about that. I did talk to them, and they are pretty receptive to the idea. They have their own requirements, of course, but I’m confident that we can fold those into our own.” I noticed my own face in the display, broadcast from my camera’s view of my reflection in the mirror, and hastily straightened out my sleepy hair.

A few minutes later, when the topic had changed, I opened a link that someone dropped into the call, and started reading the document in the glasses’ display. With the limited space available, I scanned it in RSVP (rapid serial visual presentation) mode, but quickly found it too distracting from the simultaneous conversation, requiring too much concentration. So I muted and blanked again, and walked down the hall to my office. Ensconced in front of my big screen, I re-routed the call to use the screen’s videocamera and display.

On the screen, it was easier to scan the document at my leisure. I could easily shift my focus back to the conversation when needed, without losing my place in the document. I casually highlighted a few passages to follow up on later, and made a few notes. I did the same with another document linked from the telcon, and my browser told me that this page was linked to from a document I’d annotated several months before. I marked it to read and correlate my notes in depth after the call. One thing that stood out immediately was that both documents mentioned a particular book; I was pretty sure I’d bought a physical copy a couple of years before, and I stepped over to my bookshelves. I set my glasses’ camera on auto-scan, looking for the title via OCR, and on the third set of shelves, my glasses blinked on a particular book; sure enough, I had a copy. I guess I could have simply ordered a digital version, but I already had the physical edition handy, and sometimes I preferred having a real book in my hands.

My stomach started grumbling before the call ended. I decided to go out to lunch. Throwing the book and one of my tablets into my bag, I asked my glasses to pick a restaurant for me. It scanned the list of my favorites, and also looked for new restaurants with vegetarian food, seeking a nice balance between distance, ratings, and number of current patrons. “I’ve found a new food truck, with Indian-Mexican fusion. It’s rated 4.5, and there are several vegetarian options. Dave Cowles is eating there now. It’s a 7-minute drive. Is that okay, or should I keep looking?”

“Nope, sounds great. Call Dave, would you?” A map popped up, giving me an overview of the location, then faded away until it was needed. A symbol also popped up, indicating that my call to Dave had connected, on a private peer session.

“Hey, Doug, what’s up?”

“I was thinking of going to that food truck…”, I glanced up and to the right, and my glasses interpreted my eye gesture as a request for more context information, displaying the name of the restaurant, “… Curry Favor. You’re there now, right? Any good?”

“I just got here myself. Want me to stick around?”

“Yeah, I’ll be there in about 10 minutes.” I headed out the door, and unhooked my car’s charger before I jumped in. My glasses showed the next upcoming direction, and the car infographics; the car had a full charge. “Music”, I said, as I drove off; my car interface picked a playlist for me, a mix of my favorites I hadn’t heard in a while, and some new streaming stuff my music service thought I would like. As I got out of range of my house’s wifi, my glasses switched seamlessly to the car’s wifi. It was an easy drive, with my glasses displaying the optimal route and anticipating shifting traffic patterns and lights, but I still thought how nice it would be to buy one of the self-piloted cars. My car knew my destination from my glasses, and it alerted me that a parking spot had just opened up very near the food truck, so I confirmed and it reserved the spot; I’d pay an extra 50¢ to hold the spot until I arrived, but it was well worth it. My glasses read the veggie menu options aloud on demand, and I chose the naan burrito with palak paneer and chick peas; my glasses placed my order in advance via text.

I pulled into my parking space, and my glasses blinked an icon telling me the sub-street sensor had registered my car’s presence. Great parking spot… I was right across the street from the food truck. I walked over to the benches where Dave sat. “Hey, Dave.”

We exchanged a few words, but my glasses told me my order was ready in a flash. I went to the window, and picked up my burrito; the account total came up in my view, and I picked a currency to pay it; I preferred to use my credit union’s digital currency, and was glad when the food truck’s agent accepted it. “Thanks, man,” I smiled at the cashier.

Dave and I hadn’t seen each other in a while, and we caught up over lunch. It turned out he was working on a cool new mapping project, and I drilled him for some details; it wasn’t my field, but it was interesting, and you never knew when a passing familiarity might come in handy. With his okay, my glasses recorded part of our conversation so I could make more detailed notes, and his glasses sent me some links to look at later. We finished our food quickly –it was tasty, so I left a quick positive review– and walked to a nearby coffee shop to continue the conversation. While we were talking, Dave recommended an app that I bought, and I also bought a song from the coffee shop that caught my ear from their in-house audio stream; Dave and the coffee shop each got a percentage of the sale. I learned that the coffee shop got an even bigger share of the song, because the musician had played at their shop and they’d negotiated a private contract, in exchange for promotion of her tour, which popped up in my display; that was okay, I liked supporting local businesses, and I filed away the tour dates in my calendar in case it was convenient for me to go to the show.

Dave went back to work, and I settled into the coffee shop to do some reading. First I read some of the book I’d brought, making sure to quickly glasses-scan the barcode first so I could keep a log; I found several good pieces of information, which I highlighted and commented on; my glasses tracked my gaze to OCR the text for storage and anchoring, and I subvocalized the notes. I then followed up on the links from earlier; my agent had earned its rate, having found several important correlations between the documents and my notes, as well as highly-reputed annotations from others on various annotation repos, and I thought more about next steps. I followed a few quick links to solidify my intuition, but on one link, I got stopped abruptly at an ad-wall; for whatever reason, this site insisted I watch a 15-second video rather than just opting-in to a deci-cent micropayment, as I usually did when browsing. I tolerated the video –unfortunately, if I took my glasses off while it played, the ad would know– only to find that the whole site was ad-based… intolerable, so I did some keyword searching to find an alternate site for the information.

Light reading and browsing was fine in a public place, but to get any real work done, I needed privacy. I strolled back to my car –my glasses reminding me where I’d parked– and I returned home. Back in my office, I put on some light music, and started coding. I started with a classic HTML-CSS-SVG-JS doc-app component framework on my local box, because I was old-school, and went mobile from there, adding annotated categories to words and phrases for meaning-extraction, customizing the triple-referenced structured API, dragging in a knowledge-base and vocabulary for the speech interface and translation engine, and establishing variable-usage contract terms with service providers (trying to optimize for low-cost usage when possible, and tweaking so the app would automatically switch service providers before it hit the next payment threshold… I’m cheap, and most of my users are too). I didn’t worry much about tweaking the good-enough library-default UI, since most users would barely or rarely see any layout, but rather would interact with the app through voice commands and questions, and see only microviews; I paid more attention to making sure that the agents would be able to correctly index and correlate the features and facts. Just as I was careful to separate style from content, I was careful to separate semantics from content. At some point, I reflected, AIs would get powerful enough so that information workers wouldn’t have such an easy time making a living; I wondered if we’d even need markup or APIs or standards at all, or if the whole infrastructure would be dynamic and ad-hoc. Maybe the work I was doing was making me obsolete. “‘Tis a consummation devoutly to be wished,” I thought to myself wryly.

I put the finishing touches on the app prototype, wrote some final tests, and ran through a manual scenario walk-through to pass the time while the test framework really put the app through its paces, spawning a few hundred thousand virtual unique concurrent users. Other than a few glitches to be polished up, it seemed to work well. I was pretty proud of this work; the app gave me real-time civic feedback, including drill-down visualization, on public policy statements, trawling news sites, social networks, and annotation servers for sentiment and fact-checking; it balanced opinion with cost-benefit risk-scenarios weighted by credibility and likelihood, and managed it all with voting records of representatives. It also tracked influence, either by lobbying or donations or inferred connections, and correlated company ownership chains and investments, to give a picture of who was pushing whose buttons, and it would work equally well for boycotting products based on company profiles as it would for holding politicians accountable. As part of the ClearGov Foundation’s online voting system, it stood a chance of reforming government, though it was getting more adoption in South America and Africa than it was in the US so far. Patience, patience…

Megan came home from work with dinner from a locavore kitchen; the front door camera alerted me to her approach, and I saw she had her hands full. “Open front door,” I told the house as I rose to help her. We ate in front of the wallscreen, watching some static, non-interactive comedy streams; we were both too tired to “play-along” with plots, character POV, or camera angles, and it wasn’t really our style anyway. I hadn’t gotten enough rest the night before, so I turned in early to read; the mattress turned off the bedside light when it sensed my heart-rate and breathing slow into sleep.

Note: This story of the Web and life in 2024 is clearly fictional; nobody would hire someone who’d worked in web standards to do real programming work.

Invisible Visualization
http://schepers.cc/invisible-visualization
Tue, 22 Apr 2014 05:27:43 +0000

Last year, I put together a talk called “Invisible Visualization” on making accessible data visualizations. Several people have asked me about it, so I thought I’d write a post about it.

By “accessible”, I mean able to be consumed and understood by people with a variety of sensory needs, including people with visual limitations, cognitive impairments, mobility challenges, and other considerations. I provided a few simple ways to expose the data in SVG images, but mostly I described different constraints and suggested ways of thinking about the problems.

I didn’t want to lecture people about the need for making their content accessible; I wanted to excite them about the possibilities of doing so. It’s great that there are legal regulations addressing the needs of people with disabilities (like the “Section 508” laws here in the US), but that’s not going to empower and motivate developers and designers to want to meet these kinds of design constraints and solve these kinds of technical challenges. I sought to avoid the “threat and guilt” trap that I’ve seen too many accessibility talks fall into.

I originally created the talk for the amazing OpenVis Conf 2013 in Boston, put on by Bocoup. It was a little rough, but it was received well. You can watch the video of that presentation if you’d like to see it.

The audience was data visualization folks, so it was a novel take on the topic for many of them.

I was asked to repeat the presentation at John Foliot’s Open Web Camp 2013, which had more accessibility experts in the crowd; I was nervous about that, since I’m far from an accessibility expert, but it also got good reviews there.

In September of 2013, I gave an impromptu version of the talk at a local conference, NCDevCon, which led to a lot of really great discussions.

Encouraged by others, I submitted the talk to Fluent 2014 and CSUN 2014, and was accepted at both.

The turnout for my talk at Fluent 2014 was pretty low, since I faced a lot of good competition; those who attended my talk had really nice things to say about it, though. Truth to tell, however, I was disappointed in my presentation that day; I didn’t feel well, and I didn’t perform nearly as well as before, and failed to mention a few things I’d wanted to say. I only had 30 minutes, so I did feel a bit rushed. But several people said they got something out of it, so that was gratifying.

I was more intimidated by CSUN 2014, which is the largest and most respected accessibility conference. This was a type of audience I hadn’t presented to before: almost exclusively hard-core accessibility professionals and people with accessibility needs themselves. I had a full hour, but I needed it; I had to change the way I was presenting my slide material, which is highly visual, to make sure that my blind audience members could experience it as well. My W3C colleague Mark Sadecki helped me a lot with this simple piece of advice: if there’s something on the screen, describe it; even if a blind listener doesn’t need to know whether the information is a bar chart or a picture of a manatee to get the gist, they will want to be able to talk about it with other audience members later, so err on the side of being descriptive, and give them the full experience. But I nailed it! I had a very receptive audience, and I hit just the right notes at just the right time. I was even thanked by blind audience members for my slide descriptions. Sometimes things just go right; I only wish they’d recorded this particular presentation. I also learned quite a lot at CSUN from other people doing amazing and inspirational work in accessibility.

Each time I gave the talk, I refined it a bit more, adding slides, tightening it up, and generally improving it. You can see my Invisible Visualization slides on the W3C site… but beware: they have a couple of quirks:

The “slide deck” is just a set of individual SVG files strung together with a common script file, and each one is a little webapp in itself, some with more interactivity than simple bullet points; you navigate them with the arrow keys: down for the next bullet point, and left and right to change slides;

They don’t have a lot of text, so they aren’t really self-explanatory; to get the full impact, you should watch a video of one of my presentations. I’ll try to find time to make an updated recording to give the slides better context.
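The arrow-key navigation scheme described above can be sketched in a few lines of JavaScript. This is a minimal reconstruction, not the deck’s actual shared script; the function and action names are my own illustrative choices.

```javascript
// Map a keydown event's key to a slide-deck action, per the scheme
// described above: down for the next bullet, left/right for slides.
// Names here are illustrative, not taken from the real slide script.
function keyToAction(key) {
  switch (key) {
    case "ArrowDown":  return "next-bullet";
    case "ArrowUp":    return "previous-bullet";
    case "ArrowRight": return "next-slide";
    case "ArrowLeft":  return "previous-slide";
    default:           return null; // ignore all other keys
  }
}

// In the browser, each SVG slide could wire this up something like:
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (event) => {
    const action = keyToAction(event.key);
    if (action) {
      console.log(action); // dispatch to the deck's shared script here
    }
  });
}
```

Keeping the key-to-action mapping in a pure function like this makes it easy for each little SVG webapp to share one handler while interpreting the actions in its own way.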

One of my more popular slides is a demo of the sonification of a line chart; sonification is the representation of information with sound, rather than visually. To make this demo, I just took an SVG line chart (using a <polyline> element), and ran a “cursor” line across it (using the arrow keys); I found the intersection of the two lines, calculated the y-position of the intersection, then set that as the frequency of an oscillator node using the Web Audio API. (It’s no coincidence that I happen to be the W3C staff contact for the Audio Working Group, and a long-time participant in the SVG Working Group.) At Open Web Camp, I had the fortune to meet Gerardo Capiel of Benetech, which runs the DIAGRAM Center; I collaborated with Gerardo to refine my sonification demo, and we adapted it to make a Web-based prototype of MathTrax, a graphing calculator for blind people. If you want to help me refine it further, you can check out my sonifier code on GitHub. I have a lot of improvements to make, but hopefully I can find time in the next few months.
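The core of the sonification technique described above can be sketched as a simple mapping from the intersection’s y-position to an oscillator frequency. This is a sketch of the idea, not the actual sonifier code; the function name and the 220–880 Hz range are assumptions of mine.

```javascript
// Map a y value (0 = top of a chart `height` pixels tall) to a
// frequency, so higher data points produce higher pitches. SVG y
// coordinates grow downward, hence the inversion. The frequency
// range here is an arbitrary illustrative choice (A3 to A5).
function yToFrequency(y, height, minHz = 220, maxHz = 880) {
  const t = 1 - y / height; // 0 at the chart bottom, 1 at the top
  return minHz + t * (maxHz - minHz);
}

// Browser-only portion: feed the pitch to a Web Audio oscillator.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start();
  // On each arrow-key step, after computing the cursor/polyline
  // intersection, you would update the pitch, e.g.:
  // osc.frequency.value = yToFrequency(intersectionY, chartHeight);
}
```

With this split, the mapping itself is a pure function that can be tested outside the browser, while the Web Audio wiring only runs where an `AudioContext` exists.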

If you are interested in the topic of accessible SVG, you can join the W3C Accessible SVG Community Group. We haven’t been very active up to now, but in June–July 2014, I’m carving out some time to focus on making and running basic SVG accessibility tests to establish the current state of support for SVG in screenreaders and other assistive technology, and if you’re a person who writes tests or who uses a screenreader, you are most welcome to help out. This is the fundamental work that needs to get done in order to move us forward toward a more accessible graphical Web.

Recipe for Spam
http://schepers.cc/recipe-for-spam
Sat, 12 Apr 2014 23:30:06 +0000

I just found this combinatorial comment trapped in my spam filter. It’s interesting to see how the raw template is composed so that phrases can be selected at random, perhaps to throw off spam filters; this one was obviously a misfire.

It’s like a Choose-Your-Own-Adventure or Mad Lib, but for spam.

I was impressed that it did have some minimal contextualization, automatically pulling in the title of my previous post and of my blog itself (“Annotators Anonymous » Reinventing Fire”), but was disappointed that they don’t have some sort of topic map to pull in related terms and truly customize the comment in a more sophisticated way.

It’s much longer than a typical spam, and pretty repetitive, with several greeting intros, so I assume that in addition to selecting phrases, whole paragraphs were included or removed. It’s not clear if this was posted by a bot, or by a human who was meant to manually select the phrases and context; I hope for the sake of some underpaid person that it was a bot.
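The phrase-selection mechanism is easy to reconstruct: each `{option|option|…}` group gets replaced by one of its alternatives. The sketch below is my guess at how such a template would be expanded, not the spammer’s actual script; the `pick` parameter is my addition, letting you swap the random choice for a deterministic one.

```javascript
// Expand a "spintax" template like the spam quoted below: each
// {a|b|c} group is replaced by one alternative chosen at random.
// This is a reconstruction of the mechanism, not the real script.
// `pick(n)` returns an index in [0, n); it defaults to random but
// can be replaced for deterministic output.
function expandSpintax(template, pick = (n) => Math.floor(Math.random() * n)) {
  return template.replace(/\{([^{}]*)\}/g, (_, group) => {
    const options = group.split("|");
    return options[pick(options.length)];
  });
}
```

For example, `expandSpintax("{Superb|Exceptional|Outstanding|Excellent} Blog!")` yields one of the four variants; the comment trapped in my filter clearly never had this step run on it.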

I’m rather skeptical about the topic of the link they chose, to “burberry outlet”, which seems to be some sort of knockoff purse vendor, and is unlikely to appeal to my audience; I’d have preferred something like a knockoff electronics shop in China, which would have had at least minimal appeal.

It linked to songsketches.com (a relatively cool domain name, presumably once populated with more compelling content), and it was posted by “agnes.bungaree@web.de” (IP address 36.250.191.69) on 2014/04/05 at 11:22 am ET.

Some of my favorite highlights:

It asks if I get a lot of spam, and how I combat it (very meta!)

There is a section with a variety of smileys and winkies, giving them the emotional range of a fine actor

The adjective options (and there are a lot of them!) are all very positive and encouraging, a touch I appreciate and which flatters my ego.

C+ for content, D for spelling (quite a few errors, which I’d guess would make the spam more detectable, and which frankly grates on my nerves), F for execution. However, I will give them a passing grade overall, because they did show their work.

A lot of times it’s {very hard|very difficult|challenging|tough|difficult|hard} to get that “perfect balance” between {superbusability|user friendliness|usability} and
{visual appearance|visual appeal|appearance}.
I must say {that you’ve|you have|you’ve} done a {awesome|amazing|very
good|superb|fantastic|excellent|great} job with this. {In addition|Additionally|Also}, the blog loads {very|extremely|super} {fast|quick} for me on {Safari|Internet explorer|Chrome|Opera|Firefox}.
{Superb|Exceptional|Outstanding|Excellent} Blog!|
These are {really|actually|in fact|truly|genuinely} {great|enormous|impressive|wonderful|fantastic} ideas
in {regarding|concerning|about|on the topic of} blogging.

You have touched some {nice|pleasant|good|fastidious} {points|factors|things} here.
Any way keep up wrinting.|
{I love|I really like|I enjoy|I like|Everyone loves} what you guys {are|are usually|tend to
be} up too. {This sort of|This type of|Such|This kind of} clever work and {exposure|coverage|reporting}!

Keep up the {superb|terrific|very good|great|good|awesome|fantastic|excellent|amazing|wonderful} works guys I’ve
{incorporated||added|included} you guys to {|my|our||my
personal|my own} blogroll.|
{Howdy|Hi there|Hey there|Hi|Hello|Hey}! Someone in my {Myspace|Facebook} group shared this {site|website} with uss so I came to {give
it a look|look it over|take a look|check it out}. I’m definitely
{enjoying|loving} the information. I’m {book-marking|bookmarking} and will be tweeting this to my followers!
{Terrific|Wonderful|Great|Fantastic|Outstanding|Exceptional|Superb|Excellent} blog and {wonderful|terrific|brilliant|amazing|great|excellent|fantastic|outstanding|superb} {style and design|design and
style|design}.|
{I love|I really like|I enjoy|I like|Everyone loves}
what you guys {are|are usually|tend to be} up too.
{This sort of|This type of|Such|This kind of} clever work
and {exposure|coverage|reporting}! Keep up the {superb|terrific|very good|great|good|awesome|fantastic|excellent|amazing|wonderful} works guys I’ve {incorporated|added|included} you guys to {|my|our|my personal|my own} blogroll.|
{Howdy|Hi there|Hey there|Hi|Hello|Hey} would you mind {stating|sharing} which blog platform you’re {working with|using}?

I’m {looking|planning|going} to start mmy own blog {in the near future|soon} but
I’m having a {tough|difficult|hard} time {making a decision|selecting|choosing|deciding} between BlogEngine/Wordpress/B2evolution
and Drupal.The reason I ask is because your {design and style|design|layout} seems different then most blkogs annd I’m looking for something
{completely unique|unique}. P.S {My apologies|Apologies|Sorry} for {getting|being} off-topic but I had to ask!|
{Howdy|Hi there|Hi|Hey there|Hello|Hey} would you mind letting me know which {webhost|hosting company|web host} you’re {utilizing|working with|using}?
I’ve loaded your blog in 3 {completely different|different} {internet browsers|web browsers|browsers}
andd I must say this blog loads a lot {quicker|faster} then most.
Can you {suggest|recommend} a good {internet hosting|web hosting|hosting} provider
at a {honest|reasonable|fair} price? {Thanks a lot|Kudos|Cheers|Thank
you|Many thanks|Thanks}, I appreciate it!|
{I love|I really like|I like|Everyone loves}
it {when people|when individuals|when folks|whenever people} {come together|get together} and share {opinions|thoughts|views|ideas}.

Great {blog|website|site}, {keep it up|continue the good work|stick
with it}!|
Thank you for thee {auspicious|good} writeup. It in
fact was a amusement account it. Look advanced to {far|more} added agreable from you!

{By the way|However}, how {can|could} we communicate?|
{Howdy|Hi there|Heyy there|Hello|Hey} just wanted
to give you a quick heads up. The {text|words} in
your {content|post|article} seem to be running off the screen in {Ie|Internet explorer|Chrome|Firefox|Safari|Opera}.
I’m not sure if this is a {format|formatting} issue or something tto do with {web browser|internet browser|browser} compatibility but I {thought|figured} I’d post to let you know.
The {style and design|design and style|layout|design} look great though!
Hope yyou get thhe {problem|issue} {solved|resolved|fixed}soon.

{Kudos|Cheers|Many thanks|Thanks}|
This is a topic {that is|that’s|which is} {close to|near to} mmy heart…
{Cheers|Many thanks|Best wishes|Take care|Thank you}! {Where|Exactly where} are your contact details though?|
It’s very {easy|simple|trouble-free|straightforward|effortless} to find out any {topic|matter} on {net|web} as compared to {books|textbooks},
as I found this {article|post|piece of writing|paragraph} at this
{website|web site|site|web page}.|
Does your {site|website|blog}have a contact page? I’m having {a tough time|problems|trouble} locating it but, I’d like to {send|shoot} you an {e-mail|email}.
I’ve got some {creative ideas|recommendations|suggestions|ideas} for yyour blog you might
be interested in hearing. Either way, great {site|website|blog} and I
look forward to seeing it {develop|improve|expand|grow} over
time.|
{Hola|Hey there|Hi|Hello|Greetings}! I’ve been {following|reading} your
{site|web site|website|weblog|blog} for {along time|a while|some time} now and finally goot the {bravery|courage} to go ahead and give you a
shout out from {New Caney|Kingwood|Huffman|Porter|Houston|Dallas|Austin|Lubbock|Humble|Atascocita} {Tx|Texas}!

Just wanted to {tell you|mention|say} keep up the {fantastic|excellent|great|good}
{job|work}!|
Greetings from {Idaho|Carolina|Ohio|Colorado|Florida|Los angeles|California}!

I’m {bored to tears|bored to death|bored} at work so I decided too {check out|browse} your {site|website|blog} onn my iphone during lunch break.
I {enjoy|really like|love} the {knowledge|info|information} you {present|provide}
here and can’t wait to take a look when I get home.
I’m {shocked|amazed|surprised} aat how {quick|fast} your blog loadedd on my {mobile|cell
phone|phone} .. I’m not ecen using WIFI, just 3G .. {Anyhow|Anyways},
{awesome|amazing|very good|superb|good|wonderful|fantastic|excellent|great} {site|blog}!|
Its {like you|such as you} {read|learn} my {mind|thoughts}!
You {seem|appear} {to understand|to know|to grasp} {so much|a lot} {approximately|about} this, {like you|such as you} wrote the {book|e-book|guide|ebook|e book} in
iit or something. {I think|I feel|I believe} {that you|that you simply|that you just} {could|can} do with {some|a few} {%|p.c.|percent} to {force|pressure|drive|power} the message {house|home} {a bit|a little bit}, {however|but} {other than|instead of} that,
{this is|that is} {great|wonderful|fantastic|magnificent|excellent} blog.
{A great|An excellent|A fantastic} read.{I’ll|I will} {definitely|certainly} be back.|
I visited {multiple|many|several|various} {websites|sites|web sites|web pages|blogs} {but|except|however} the audio {quality|feature} for audio songs {current|present|existing} at this {website|web
site|site|web page} is {really|actually|in fact|truly|genuinely} {marvelous|wonderful|excellent|fabulous|superb}.|
{Howdy|Hi there|Hi|Hello}, i read your blog {occasionally|from time to time} and i own
a similar one and i was just {wondering|curious} if
you get a lot of spam {comments|responses|feedback|remarks}?
If sso how do you {prevent|reduce|stop|protect against} it,
any plugin or anything you can {advise|suggest|recommend}?
I get sso muhh lately it’s driving me {mad|insane|crazy} so anyy {assistance|help|support} is very much appreciated.|
Greetings! {Very helpful|Very useful} advice {within this|in this particular} {article|post}!
{It is the|It’s the} little changes {that make|which will make|that produce|that will make} {thebiggest|the largest|the greatest|the most important|the most
significant} changes. {Thanks a lot|Thanks|Many thanks} for sharing!|
{I really|I truly|I seriously|I absolutely} love {your blog|your site|your website}..

I {like|wanted} to write a little commment to support
you.|
I {always|constantly|every time} spent my half an hour to read this {blog|weblog|webpage|website|web site}’s
{articles|posts|articles or reviews|content} {everyday|daily|every day|all the time} along with a {cup|mug} of coffee.|
I {always|for all time|all the time|constantly|every time} emailed
this {blog|weblog|webpage|website|web site} post page to all my {friends|associates|contacts},
{because|since|as|for the reason that} if like to read it {then|after that|next|afterward} my {friends|links|contacts}
wilkl too.|
My {coder|programmer|developer} is trying to {persuade|convince} me to move to .net from
PHP. I have always disliked tthe idea because of the {expenses|costs}.

But he’s tryiong none the less. I’ve been using {Movable-type|WordPress} on {a number of|a
variety of|numerous|several|various} websites for about a year
and am {nervous|anxious|worried|concerned} about switching to another platform.

The Recline of the Mobile Web
Fri, 11 Apr 2014

I’ve seen much chicken-belittling of the future of the Web on mobiles, be it ruminations on its decline or wishful thinking. Regardless of the intent, such opinion pieces always point to the same factoids:

people are using apps on their mobiles more than they use the Web

mobile is the future

I don’t contest either of these claims. I just think they’re irrelevant.

These articles relegate the future Web to a role as a repository of “long-tail content” (their catchword for diverse or niche content) or as a try-before-you-buy for wanna-be apps.

Sometimes these writers have good intentions: they are making a rallying cry for the Web, and that’s a fine motivation. Sometimes they are just trying to stir up controversy or get noticed. Either way, I don’t think this message is quite so deserving of the attention it gets, and I’m puzzled that this same message seems so recyclable. Pardon me while I trash it instead.

For starters, there will always be more niche content than there will be mainstream content. I think the growth of the Web over the past 25 years has shown the dramatic diversity of expression, collaboration, and community that could (and will) only thrive in a niche-friendly ecosystem. People like mainstream content, but they love stuff that only interests them and their niche community. So, I don’t see anything wrong with serving this multitude of niches; the Web is a niche that can’t be scratched.

And when you look at the “traditional” Web of documents and data, the Web has gotten much more sophisticated; it’s far easier to find the information you want now. Search engines have gotten so good at extracting useful bits of data from websites to present on their results page, I frequently (yes, guilty) don’t even visit the primary source pages when I’m looking for some basic information (e.g. when a movie was made or some more significant date in history, or the name of a song based on some lyrics, or other burning questions); that content wouldn’t exist, though, without the variety of pages feeding the aggregators. And when I follow a link from my Twitter or Facebook app, I read the article or blog post in a mobile app called a “browser”. So, while I don’t spend as much time on the Web as I do in apps, it’s more a mark of the improvement of the Web, both in creating content and in helping me find and consume it.

But let’s pretend that this wide and lasting appeal isn’t enough. Let’s look at current and trending usage on mobiles.

Once upon a time, mobile phones were primarily used to talk to other people. How quaint. Now, most mobile Internet traffic (by volume) is video; and even that is going to embrace Web tech as video services enable outbound links and inbound annotations. Will it matter if it’s in the browser itself, versus a specialized media viewer, if it’s still using Web tech?

Regarding games (that bellwether of app trends), I saw Brendan Eich’s keynote at Fluent Conf 2014 a few weeks ago, and he was showcasing a first-person shooter in WebGL, running in the browser. He was illustrating how powerful and performant the Web has gotten, but I could only think, “That’s not the Web, that’s just an executable runtime.” Sure, it’s running in the browser, but where are the links? Where is the API? Where is the human interaction? Show me that same app, but include a WebRTC videochat line, and then I’ll start to get interested.

Yeah, my own mobile use dramatically favors apps: apps like Candy Crush Saga. But when I want something more substantive than a time-waster, the browser is there for me. It’s not the quantity of time that someone spends on their phone that’s the telling factor, it’s the quality of engagement. It doesn’t matter that Candy Crush Saga isn’t made with Web tech (assuming it’s not); there’s nothing Web-like about such games or apps. But if that really worries you, yeah, WebGL (and Canvas and SVG) will get you there too, in due course.

It wasn’t so long ago that Flash apps were going to kill the Web (remember Flash?), and many of those same developers moved from Flash to native mobile apps. They have a niche; they have a business model. But it’s not good for their customers, and it’s not going to kill the Web.

You want some trends?

Many installed “apps” are really just hybrid Web content containers (framing HTML5 content); many others repackage Web content in a custom format (Facebook? Twitter?), and would lose users if they didn’t also publish it on the Web

Up to a few years ago, ebooks were exclusively proprietary; now they are increasingly EPUB, which is just HTML5 repackaged (and IBM recently announced they are moving to EPUB-first over PDF)

Almost every bit of functionality that the mobile has, from geolocation to local storage to camera and microphone to battery life and signal strength, is undergoing standardization for the open Web Platform

Given enough time (and the Web has time), there will be near-perfect parity between native mobile features and Web Platform features… except that the Web Platform will have everything the native platform has, and it will also have all the features and content of the Web. Unlike walled gardens, where these features grow fast and wild, the Web is moving more deliberately; features are developed more slowly because we are taking the best of breed, and making them interoperable across all platforms and devices, so you don’t have to make and maintain 4 (or more?) versions of your app. The Web advances in periodic waves of experimentation and stabilization, across many different fronts; if it seems to you that the Web is resting, it’s probably just priming for a surge, and you probably aren’t looking closely enough in the right places.

Yes, we need to work as fast and as hard as we can to achieve that parity; no, we can’t afford to get lazy and rest on our laurels. Yes, we need to make sure that webapp and content creation are as easy and powerful as for native apps. Yes, as Chris Dixon points out, centralization and concentrated influence are serious problems we need to combat. But I don’t think we have to worry about the very short term.

I think people are inadvertently reversing the underlying trend. Mobiles have always been a vendor-controlled ecosystem; they aren’t squeezing out the open Web, the open Web is seeping into the cracks in their defenses, like water. That leak will become a torrent; that trickle will become a tide.

So, if you’re on the side of the open Web, lean back, relax, and enjoy the sunset of locked-down systems, sinking into the tide. We have work to do come dawn.

Annotators Anonymous

You might be an annotator, too. If you think you might be, you should come to our support workshop in San Francisco in April.

I first realized I had an annotation problem about 3 months ago, at the Books in Browsers conference. I saw a great talk by Dan Whaley and Jake Hartnell of Hypothes.is, about their annotation engine, built on top of the Annotator project.

They demoed their browser extension, showing how they could select a passage of text and open a sidebar to leave a comment on that specific passage; when the web page is reloaded, the extension finds the original selections for all the annotations on that page, and highlights the specific passage as you select and read each annotation. And you could even reply to annotations in a threaded conversation… annotating the annotations (whoah, dude, that’s so meta!).

I’d wanted this functionality for WebPlatform.org for a while; we have a primitive annotation system, but it only anchors on the section level, which is still better than comments at the bottom of the page, or on a separate page entirely. I immediately saw the potential in Hypothes.is’ much more sophisticated script library; as a tool for suggesting improvements, requesting expanded coverage, asking and answering questions, and generally peer-reviewing collaborative documents, this is a tried-and-true UI that’s a critical feature of office tools like Word and Google Docs.

Then I realized that we could use this same tool for W3C’s specs, to allow simpler and much more immediate and contextual feedback than the current clumsy system of firehose email lists and bug trackers.

In hindsight, I see that the signs of my annotation problem go back many years. I was jonesing for a way to improve the flow and timeliness of feedback from the average developer or designer, not just those working at big member companies (one of my main goals as W3C Developer Relations Lead). To get my fix, I’d prototyped a crappy annotation system (though I didn’t know that’s what it was) soon after I first started with W3C, but scrapped it when a less-crappy system was deployed on the HTML5 spec to let people file bugs right from the spec. It didn’t satisfy me, though… it’s not the same as a true annotation system where the annotations persist in context.

Once I saw that sweet, sweet new annotation engine, I had to have it. So, later this year, we’ll be experimenting in a couple of Working Groups (starting with the Audio WG) with allowing true annotations as a primary feedback channel. (I’ll keep you posted.)

And these spec annotations don’t have to be from humans; we already have automated scripts that decorate the specs with notifications of test coverage or implementation status, and what are those but a specialized kind of annotation? Yeah, see? Once you get a taste of annotations, everything starts to look like an annotation.

And I mean everything! (Well, no I don’t… but I do mean a lot of things!) Think about it: what’s your primary contributive activity on the web? Reading a blog or article or kitten meme or infographic is a consumer activity; sharing those artifacts, on Twitter, Facebook, Pinterest, Tumblr, or wherever, is another main activity, a distributive activity; leaving a comment on that artifact, or replying to someone else’s comment, is also a contributive activity, just as much as creating the artifact in the first place, and (let’s face it) far more common for most of us than creating the primary artifact (it took me ages to get around to writing this blog post, let me tell you).

So, yeah; most of our online contributive activity is some form of annotation. Whether your comment is in your tweet, or at the bottom of the article, or in some sort of threaded forum, when you’re talking about some other document that you’re linking to, you’re annotating.

Here’s a simple chart that shows just how deep I’ve gotten myself into this addiction (based on an original by Dan Whaley):

Once I realized that annotations were what I’d been craving, I started looking for them wherever I could. I found at least 40 companies that are not yet W3C members doing web annotations in one form or another; from ebook readers that let you share notes, to hip sites like Rap Genius or Medium or Quartz, to education/research tools like Diigo, to more traditional (but innovative) sites like New York Times or Financial Times. Why were they all doing annotations? Because having the comments in context, right there where the reader is looking, leads to more incisive, directed, relevant comments. Because the immediacy of annotations lets people point out small errors in otherwise good articles, or gems in otherwise mediocre articles. Because the reader comments can add value to the article, rather than just be a misinformed screed that sinks to the bottom of the page.

I dug into the topic for a couple months. I told myself that I could stop anytime I wanted, but deep down, I knew the truth.

I wanted to standardize annotations.

W3C had an experimental project back in the late 1990s called Annotea, but it didn’t really make it into standards. So, what could usefully be standardized? I have a few ideas.

Robust Anchoring

A lot of great research has gone into this, but it remains a hard problem. How do you link to a passage of text that might have moved or changed (possibly due to an annotation that suggested the change!), and which doesn’t have a built-in anchor in the markup? What if it’s the multi-page view of the article, rather than the single-page view? What if it’s a new version at a different URL, but is mostly the same content (like a W3C spec)?

Hypothes.is has a nice multiple-factor selection algorithm that is pretty rigorous about finding the right passage even across changes. I think that would be a useful feature to have in browsers.
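The gist of that multi-factor approach can be sketched in a few lines of JavaScript (an illustrative toy, not Hypothes.is’ actual algorithm; the selector fields mimic their quote-plus-context idea):

```javascript
// Toy multi-factor anchoring: try an exact match on the quoted passage
// first; if the text has changed, fall back to locating the passage by
// its surrounding context (prefix and suffix).
function anchorQuote(text, selector) {
  // 1. Exact match on the quoted passage itself
  const start = text.indexOf(selector.exact);
  if (start !== -1) {
    return { start, end: start + selector.exact.length, strategy: "exact" };
  }
  // 2. Context fallback: find the prefix and suffix, and treat whatever
  //    lies between them as the (possibly edited) passage
  const prefixAt = text.indexOf(selector.prefix);
  if (prefixAt !== -1) {
    const afterPrefix = prefixAt + selector.prefix.length;
    const suffixAt = text.indexOf(selector.suffix, afterPrefix);
    if (suffixAt !== -1) {
      return { start: afterPrefix, end: suffixAt, strategy: "context" };
    }
  }
  return null; // the annotation is "orphaned"
}
```

A real implementation adds more factors (character offsets, fuzzy matching), but even this two-step fallback survives small edits to the annotated passage.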

But even then, they have to insert tags into the content in order to highlight the passage, and this is a pain if there are multiple overlapping annotations or if the annotation crosses over element boundaries (like the last sentence in one paragraph and the first in the next paragraph). It would be nice to be able to style a passage like you would an active selection, with a CSS selection pseudo-element that only allows you to change the color, background-color, and outline properties, which don’t affect reflow of document layout and are thus pretty computationally inexpensive.
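No such pseudo-element exists; hypothetically, modeled on ::selection, the usage might look something like this:

```css
/* hypothetical: style an annotated passage without inserting tags,
   limited to properties that don't trigger reflow */
::annotation-highlight {
  color: #333;
  background-color: rgba(255, 235, 130, 0.5);
  outline: 1px dotted #b8860b;
}
```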

Federation and Syndication

It’s great that forward-looking companies are providing an annotation interface to their own content, but it still doesn’t lead to really free, open conversations; for controversial sites, they still control which comments are approved, and that can lead to abuse just as surely as spam and trolling can. Individual commenting or annotation systems lead to fragmented online identities, and you still get personal-data silos even with distributed commenting systems like Livefyre or Disqus, or distributed logins with OpenId, Facebook, and Twitter. I’d like to be able to publish and aggregate comments across multiple systems at the same time. I’d like to be able to maintain a personal identity (or multiple pseudonyms or even anonymity) and authorship for my online content, not beholden to individual publishers (because that’s what commenting systems are: publishers), to share with specific groups or friends or just keep my notes to myself, and I’d like to be able to see comments by a particular user across the web, not just on a few sites. This feeds into the goal that some people call IndieWeb. Annotation services with multiple publishing channels could enable that.

Annotation Events

One of the parties that should be able to publish my annotations, if they wish, is the publisher of the original article. When Google tested the annotation waters with the now-defunct Sidewiki a few years ago, Jeff Jarvis rightfully complained that it effectively stole value from his own blog, by stealing away value-adding comments.

And it’s also hard for an annotation service to know when an annotation should be made, because many webapps swallow events.

To address these issues, I’d like to see a pair of annotation events: annotationstart that signals that a selection has been made, and annotationend that notifies the web page that an annotation has been made and where you can find the feed for it, so you can retrieve it via a REST API and bring in the best annotations into your own commenting system.

Data Model

To have this kind of syndication and federation, you need to have a common data model.

I had previously noticed that some folks had started a W3C Open Annotation Community Group, but it seemed a bit… Capital-S-Semantic for my tastes.

But once I read their spec and looked past the descriptions of RDF and SPARQL, I saw a sound data model that would be useful for interchanging annotations between services. Their model seems large, and at first glance might be overly complicated (or, more charitably, “robust”), but you don’t need to use all parts of it for every instance; an annotation can be really simple.

I personally don’t think that most people creating annotations will want to express them in RDF, but that shouldn’t matter. So long as an annotation can be expressed in an agreed-upon serialization, like a subset of HTML, it can be transformed into whatever representation the back-end system wants (including RDF).
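For example, a bare-bones annotation in that spirit, written as a JavaScript object in JSON-LD shape (the field names follow the annotation model drafts; treat the exact vocabulary as illustrative rather than normative):

```javascript
// A minimal annotation: a textual comment body, anchored to a quoted
// passage in a target document.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "Typo: 'recieve' should be 'receive'."
  },
  "target": {
    "source": "http://example.com/article.html",
    "selector": {
      "type": "TextQuoteSelector",
      "exact": "you will recieve a reply"
    }
  }
};
```

Nothing here requires RDF tooling on either end; a back-end that wants triples can derive them from this serialization.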

Personally, I’d like to see a <note> element in HTML, with a client-side API for scroll position of a selections and other things. This could be used for comments (I don’t like the use of <aside> for this), and also for footnotes (like the clever way Wikipedia does them), which is a big issue for digital publishing.

Update:

Current State of Authoring Accessible SVG
Wed, 04 Sep 2013

SVG has a few metadata features intended specifically for accessibility, and also provides the ability to use real text instead of images for textual content, just like HTML. This combination of text and metadata serves as one of the cornerstones of SVG’s accessibility; there are other features, such as scalability and navigation features, but this post will focus on the descriptive capabilities of SVG.

I was recently looped into a discussion on the @longdesc attribute in HTML which dealt in part with SVG accessibility, a subject I’m fascinated by. The specific debate is whether a @longdesc value should be applied for SVG, or whether SVG’s native accessibility features should be used, or both.

Let me explain @longdesc. For short descriptions of a picture, a few words to a few sentences, you can simply include the text in the <img> element’s @alt attribute. The @longdesc attribute of an <img> element allows a content author to add a URL that leads to a longer text description of the image. You can read more in this excellent article on @longdesc by WebAIM.

SVG provides a different way to provide text descriptions: the <title> and <desc> elements, each of which can be a child of any graphic or container element in SVG, and which contain text descriptions of that element. The <title> is meant for shortish names, while the <desc> can provide arbitrarily long descriptions; nothing in the SVG spec limits the length of these elements, but choosing the appropriate text is a best practice (as for any prose). These metadata elements can be read out by screenreaders (assistive technology that speaks the available text aloud), along with the content of the <text>, <tspan>, and <textPath> elements.

Because each SVG shape (or group of shapes) can have its own <title> and <desc>, a certain amount of structure and sequential flow is available to these screen readers. Each text or metadata element is read out in document order. Together, the series of text passages can comprise a complete description of the SVG graphic. Which is kinda cool… self-describing images!

Well, that’s the idea, anyway. Now let’s look at it in practice, and specifically, at authoring SVG accessible content, and support in browsers and screenreaders.

Note: though I offer some specific guidance here, this is not really a tutorial; it’s more a background article on the topic, partly from a standardization point of view. I’ll be writing a tutorial aimed at developers and designers soon on WebPlatform.org.

Simple Example

Before we go any further, here is an example of a simple SVG file with titles and descriptions, just to provide some context:
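Something like this minimal file (the shapes and wording are placeholders, not the original demo):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <title>Two Triangles</title>
  <desc>A large blue triangle beside a small orange triangle.</desc>
  <g>
    <title>Triangles</title>
    <path d="M 20,180 L 90,40 L 160,180 Z" fill="blue">
      <title>Large triangle</title>
      <desc>A large blue triangle pointing up.</desc>
    </path>
    <path d="M 150,180 L 175,130 L 200,180 Z" fill="orange">
      <title>Small triangle</title>
      <desc>A small orange triangle pointing up, to the lower right.</desc>
    </path>
  </g>
</svg>
```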

And here’s what it looks like (proof positive that I’m not a designer):

Here, we have a title and description for the whole document, and the 2 triangle graphics elements in the document also have titles and descriptions; in addition, the <g> (group element) wrapping the triangles also has a title. In a more complex graphic, you might have nested groups, each describing their contents, which could be explored in hierarchical detail by a screenreader user. In this case, going into this level of detail on this simple graphic is overkill; the document-level description would suffice, and the extra titles and descriptions would be more information than the average user would want to know, so I would not consider this a best practice. But it serves as an easily understood entry point.

@longdesc

@longdesc is controversial. It was deprecated in HTML5, and resurrected by accessibility advocates; many differ strongly on its value. I’m not going to opine on the attribute in general, just its applicability to SVG. I’m not sure it’s necessary to use it for SVG. On the email thread, John Foliot, accessibility advocate and outspoken champion of the @longdesc attribute, pointed me to a blog post he’d written on HTML authoring tools for @longdesc, and asked if the authoring-tool landscape for adding <title> and <desc> in SVG was on par with that.

I think it is, for the most part… with some notable exceptions.

Inkscape

Inkscape is the most popular open-source SVG authoring tool. It provides a few ways to add metadata to various parts of the document.

Document-level title and description: this is probably the most important capability, though the least technically interesting; Inkscape gets this mostly right, though I have a suggestion for improvement. In the right-hand dialog, labeled “Document Metadata” (File → Document Metadata…), there is a space for a title (“Tilted Stars”), and another for the description (“Three 5-pointed stars in various colors, sizes, and positions”). Unfortunately, the description field populates an RDF metadata block (thus the confusing jargon “Dublin Core”, which is the name of a basic ontology around document details) rather than a <desc> element; this should actually be a simple fix for Inkscape, if we convince them that this is important… and it is important, because it will improve accessibility. (Technical note: they could kill two birds with one stone by using RDFa rather than XML-RDF; if you don’t understand what this means, that’s probably for the best.) In any case, authors are probably more likely to write a single title and description for the whole document, than for each element, and a simple overall description is more easily consumed, so the document-level metadata are important.

Per-shape title and description: this is more technically interesting, and more interactive, but also more labor-intensive. In the top-left dialog, labeled “Object Properties” (select a shape, then Object → Object Properties…), there are spaces for the title (“Star”) and the description (“A yellow 5-pointed star”), which are directly inserted into the document as child <title> and <desc> elements of the shape element, meaning that they describe that specific element. Why is this interesting? Because if each shape has its own title and description, a user can explore the document with a screenreader, instead of relying on the gist provided by the author; as the document changes, with shapes added or removed, there would not need to be a detailed overall description that might go out of sync. And while no screenreader I’ve seen does this, there is the possibility for even more in-depth automatic descriptions about the relationship of different shapes; in this case, a sophisticated user agent could describe the scene like, ”There is a large yellow star at the top; below, there is a smaller blue star on the left, and a small red star on the right.” This would be tricky to do for arbitrary graphics, but for well-defined, structured graphic types, like charts and graphs, it might be pragmatically possible; I’ll touch on this later, when I talk about ARIA.

Document tree view: the bottom-left dialog, labeled “XML Editor” (Edit → XML Editor…), shows each element in a nested tree format, including the <title> and <desc> child elements of the <path>. It’s a bit geeky, but you can use this dialog to add, remove, duplicate, or rearrange elements manually, so you could add <title> and <desc> by hand. But why would you, when there’s a dialog for that? Well, you could use it to add a document-level <desc>, or to group elements together and add a single <title> and <desc> to the group (i.e., <g> element). And it’s sometimes useful to see the structure of your document, in general.

I give Inkscape a B+ for basic accessibility metadata authoring! Fix how the document description is embedded in the file, and make the workflow a bit more intuitive, and Inkscape will get an A.

Illustrator

Adobe Illustrator is hands-down the most popular vector drawing tool for professional designers, and it exports SVG with no problem. It also allows an author to set a title and description for the whole document, though I don’t know how to do so at the level of the individual elements. If this functionality is there, it’s not immediately obvious to me; it would be easy to add, though these drawing tools are so complex already, it might get lost in the smorgasbord of features.

As it is, it’s not absolutely intuitive to get to this dialog, nor is it part of a typical workflow, unless the author is trying to embed license and attribution, since it’s part of the same dialog; I don’t know how common this is in practice. On the “File Info” dialog (File → File Info…), there are fields for Document Title and Description.

Unfortunately, and surprisingly, this does not save the file with an SVG <title> and <desc>, as you might expect; if you do choose the option to save the SVG file with XMP (Adobe’s Extensible Metadata Platform), it still doesn’t use the native elements, and inserts a huge block of RDF-XML markup, which seems to contain a raster (bitmap) fallback of the image, and dramatically bloats the file size!

option                                 file size     size increase
no metadata                            715 bytes     1x
simple SVG <title> and <desc>          780 bytes     1.09x
XMP, no raster fallback                33,965 bytes  47.5x
XMP, with raster fallback (default)    47,596 bytes  68.7x

Adding a title and description makes the file nearly 70 times larger by default (or a mere 47 times if the raster image is manually removed), which is not Web friendly, and doesn’t help accessibility; that’s the worst of both worlds. As an exercise in absurdity, I opened a simple Illustrator SVG file in the ChromeVox screenreader, with title and description in XMP RDF markup just the way Illustrator saves it, and let it read out all the metadata; it took an astonishing 45 minutes of gibberish as the screenreader tried its best to meaningfully speak the “text” content (the embedded encoding as JPEG, and subsequent color settings) of a file that contained a simple yellow star. Obviously, this was not meant for consumption by screenreaders.

Based on this, I have to give Adobe Illustrator a D for basic accessibility metadata authoring. It doesn’t currently output accessible SVG, but it does have some of the infrastructure in place.

How could they fix it? Simple: add fields in the “SVG Options” save dialog for title and description, and save them as SVG elements. That would also solve the workflow problem of this being hard to find. Consider adding a way to add a title and description on individual shapes and groups, either in a dialog available from the Objects menu or in the Options modal dialog from the Layers sidebar (where you can currently set the id attribute from the Name field).

Hand-Authoring

I have to confess, having done SVG for 13 years or so, I am pretty comfortable with hand-authoring SVG. This is crazy, however, and I don’t recommend it. That said, it’s pretty simple even for an SVG newbie to take an SVG file you’ve drawn in an authoring tool, open it in the text editor of your choice, and manually add metadata; browser debugging tools make it even easier to find just the right elements. If you’re modifying it for even the most basic interactivity, you’re probably going to be doing this step anyway, so adding accessible metadata while you’re at it is not an onerous task.

Most people don’t create their own SVGs. They typically reuse SVG icons or images from an image-sharing service like The Noun Project, OpenClipArt, or IcoMoon, and integrate them into their site as-is, linked directly from HTML or referenced from CSS, or combine them together in a single large SVG file with multiple images that make a single scene (which is what I often do). I call this combination of SVG images the “icon collage technique”. This has been extremely effective for me, especially when I’m making slides for a presentation, though it’s not without its challenges, mostly owing to the different sources of the images; the images are often of inconsistent quality, and almost none of them include their own titles or descriptions; from a combinatorial perspective, they are also less reusable because they typically aren’t auto-scaling, and most people don’t know how to resize the SVG. These image-sharing sites should optimize for this kind of reuse, and should provide titles and simple descriptions by default, so that an SVG collage is somewhat self-describing with no extra work on the part of the author.

Scripted SVG

This is the real payoff. Most SVG files, especially graphs and charts, are neither lovingly hand-crafted nor painstakingly drawn in a drawing tool. They’re scripted, server-side with R or PHP or other server languages, or increasingly client-side with JavaScript, with Raphaël or D3.js.

One of the most popular JavaScript libraries for creating data visualizations is D3.js, which stands for “Data-Driven Documents”. D3.js gives the author a lot of power, because it leverages SVG’s DOM structure and extends it with syntactic sugar and helper libraries. Adding a <title> element to a bar in a bar chart, for example, is trivially easy in D3.js (and Raphaël, for that matter).

Here’s a small demo of a “tooltip” example (source code) from chapter 10 of Scott Murray’s “Interactive Data Visualization for the Web”, a good book on getting started with D3.js. In this example, sighted users (without mobility problems) get the data in 2 ways: as a rough gist from the visual representation of the bars, and as raw numbers in the tooltip on hovering (a little nice-to-have easter egg); users of a screenreader will get the raw numbers from the title. The relevant part of the code, which creates a bar chart from a simple array of numbers (each value represented by the object d), is here:

.append("title")
.text(function(d) {
return d;
})

Four short lines of code, saying “add a <title> element here, with the text reflecting the value of the bar”. A meaningful description of the whole chart would be more effort, but remember that the elegance of SVG’s “discrete serial aggregate” metadata model (my fancy way of saying “building up the big picture one relevant bit at a time”) doesn’t need more lengthy descriptions for structured content; comprehension is an emergent property of the graphic, just as it is for sighted users.
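For context, the DOM those lines produce looks roughly like this for each bar (the values and presentation attributes here are illustrative, not copied from the demo):

```xml
<!-- One generated bar: the nested <title> is what a screenreader
     reads, and what most browsers show as a tooltip -->
<rect x="0" y="150" width="20" height="80" fill="teal">
  <title>80</title>
</rect>
```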

This is the way to go for dynamically generated content that may change day-to-day, or even be updated in real-time.

A Real Example: Data Visualization

What did I mean by all that mumbo-jumbo? And isn’t that bar chart a little abstract, with no real data? Let me show you a different approach, with real data and a real bar chart (albeit a static one).

In fact, hidden metadata is not necessarily the best way to provide a description. Visible text can be even better, as the WebAIM @longdesc article says; that way, it benefits everyone. I took the liberty of recreating their bar chart from that example, in SVG (yes, I did it by hand, because there’s something wrong with me).

It’s not a perfect duplication, but it’s good enough. I took a couple liberties (like adding the percentage symbol to the values) to make it work better. If you follow the link and view the source, you’ll see that I didn’t really use <title> or <desc> elements… but this example should render in a screenreader almost identically to WebAIM’s @longdesc text:

Percentage of Total Noninstitutionalized Population Age 16-64 Declaring One or More Disabilities

Total declaring one or more disabilities 18.6%

Sensory (visual and hearing) 2.3%

Physical 6.2%

Mental 3.8%

Self-care 1.8%

Difficulty going outside the home 6.4%

Employment disability 11.9%

How? Trivial: I just made the legend text and the value label of each bar part of the same <text> element, positioning the value above its bar with a <tspan> element. Remember, text in SVG is rendered (or read) in document order, not in positional order.
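In markup, the technique looks something like this for one bar (coordinates are illustrative, simplified from my chart):

```xml
<!-- Legend label and value share one <text> element, so a screenreader
     reads them together, in document order:
     "Sensory (visual and hearing) 2.3%" -->
<text x="150" y="260">Sensory (visual and hearing)
  <tspan x="60" y="228">2.3%</tspan>
</text>
```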

The beauty of this approach is that it degrades fairly gracefully in browsers that don’t support SVG (or even images), even for sighted users: the text content is just presented as text, so for information graphics, a simple list of the relevant data is better than missing the data completely.

And for authors, the wonderful thing about using visible text is that Inkscape, Illustrator, and I’d reckon any other vector editor already support this splendidly, with lots of options to please designers. The days when you needed to use images for interesting-looking text are all but gone, with the rise of Web Fonts (and the WOFF specification) and the proliferation of hundreds of free and commercial fonts available for use on the Web. The only criticism I have for Inkscape and Illustrator here is that when the author chooses to convert text to an outline rather than real text, so they can tweak the glyphs, neither authoring tool automatically provides a <title> with the replaced text for the containing group (or layer, in Illustrator parlance); this would be a simple and intuitive accessibility fix, one that the author wouldn’t even have to know about or set explicitly. For this reason, I’m giving both Inkscape and Illustrator a B for accessible visible-text authoring; each will get an A when they automatically provide the alternative text in the <title>.

Not to dismiss @longdesc, which may have value for photos and graphics that were created by someone else, but using visible text and metadata internal to the image file itself is far more elegant. No complicated procedure to create additional content which could get out of sync, no separate location/URL for that content, no need to force someone to navigate to that content… just a pure, simple inline experience of the same content as everyone else, just in a different modality, in which the description is an inherent characteristic of the graphic, and travels with the graphic wherever it is used.

The question now is, do screenreaders support it?

Browser and Screenreader Support

If you’re using a screenreader to read this article, you may not have experienced that elegant solution that I described. Why? Because your screenreader may not yet support SVG completely, or at all. To you, I apologize, and I’m working to make it better. Henny Swan, an accessibility expert, wrote a blog post a few years ago expressing concern about support for SVG in browsers and AT, which has genuine implications for accessibility. Fortunately, I believe that support has improved since then.

I’ve done several experiments with ChromeVox, a screenreader extension for Chrome, and its older cousin FireVox, for Firefox. Both treat inline SVG (that is, SVG content inside an HTML file) as “unknown elements”, and consequently will read out the text contents of elements like <title>, <desc>, <text>, <tspan>, and <textPath>, and will sometimes find links (SVG also uses the <a> element, though with slightly different attributes; this will change to match HTML5 in SVG 2). This is good. However, neither will read standalone SVG files, nor SVG files referenced from <iframe>, <object>, <embed>, or <img> elements in an HTML page; this should be fairly simple to change, however, and I’ve started discussions with the developers about it. Since script libraries like Raphaël and D3.js produce inline SVG by default, this is not an issue for most JavaScript solutions.

On the one hand, screenreaders still present a challenge, and they are inconsistent in which elements they read out. In order to not provide too much data, they should limit what they read only to SVG elements, not any metadata (like RDF licenses or alternate encodings) that happens to be embedded in the file. In fact, the text contents of the <metadata> element should simply be skipped over completely, as should the contents of the <script> and <style> elements; none of these is meant for human consumption.

After Open Web Camp 2013 last month, where I gave my “Invisible Visualization” talk on making data visualizations accessible, I spent some time with several people (Glenda Sims, Estelle Weyl, Katie Haritos-Shea, and Denis Boudreau) doing some basic SVG-in-HTML testing across multiple browsers in combination with NVDA, JAWS, and VoiceOver (three common screenreaders), with generally positive results on reading SVG <title> elements. We didn’t record the results, and I didn’t examine the tests in detail, so we clearly need more robust testing, but anecdotally, I was pleased with the improvement in support from just a couple years ago.

James Craig, an Apple accessibility expert, informed me that in WebKit, the SVG DOM (that is, the browser’s internal model of the document) is exposed to the accessibility APIs, so <title> and <desc> elements should be available to any DOM-based screenreader. This is also encouraging.

Identifying and activating links inside SVG is still a consistency challenge for screenreaders (at least in ChromeVox); this is less important for simple images, but for anyone wanting to use an SVG as a kind of advanced image map, or who wants to add drill-down capability to their chart, this is a real problem that screenreaders need to address.

Also, currently all the text in the SVG is (at least in some screenreaders) read out as a single passage, rather than as structured, navigable items; it would be better to treat each textual element as an individual item, so a screenreader user could easily go back to repeat the last item, or the first item on the “list”. Treating the text content as a list (or set of nested lists) would be a good general algorithm.

To keep the momentum going, I started a W3C Accessible SVG Community Group a few weeks ago, and it’s already got 18 participants; we’ve started planning a hackathon, maybe co-located with the SVG Working Group’s face-to-face meeting in October, to test SVG accessibility. We will publish a report, and after that, we should have a much better idea of the current state of SVG accessibility support, and what specifically needs to be improved.

Ugly Bits

Lest you think that browser support is the only problem here, let me disabuse you of the notion that SVG accessibility techniques are otherwise bulletproof. In the static chart example above, I told you the first thing someone using a screenreader should hear. But I didn’t tell you the last part, which the screenreader will speak after reading out the useful information:

These are the axis labels… mostly noise left over from the depiction of the data, an artifact of the visual aspect (well, actually, the y-axis label is useful information, but it’s presented out of context).

It’s easy enough for a screenreader user to simply skip over these… if they know that they aren’t important, and that they aren’t misplaced in the middle of important content. But how could they know that the series of numbers are just the axis tickmarks, and not a series of values? This is especially true for people who’ve been blind from birth, have never seen a bar chart, and don’t know which bits are core information and which are extraneous.

So, it’s important for an author to put the information in the right sequence, in the correct document order; necessary, but not sufficient.

An author could make judicious use of the aria-hidden attribute to hide the extraneous content from screenreaders, while still rendering it on-screen. ARIA stands for “Accessible Rich Internet Applications”, and it defines a set of roles, states, and properties that help represent various widgets, such as forms or user-interface controls, to users who can’t see the visible indicators of state or intent; if you have a purely-CSS checkbox or selector button, ARIA can help non-sighted users know that it’s unchecked, or which option-state is selected.

The ARIA attribute aria-hidden="true" is the screenreader equivalent of the CSS rule visibility: hidden; in other words, while elements will still be rendered to the screen for sighted users, they will not be read out for non-sighted users. It’s not necessary to use this on graphics elements, or to worry about the document order of graphics elements without titles, because those should be “invisible” to screenreaders anyway.
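For example, the x-axis tickmarks from the earlier chart could be marked up like this (a hypothetical fragment; the grouping and values are mine):

```xml
<!-- The tickmark labels still render on screen,
     but a screenreader skips the whole group -->
<g class="x-axis-labels" aria-hidden="true">
  <text x="40" y="290">0</text>
  <text x="80" y="290">5</text>
  <text x="120" y="290">10</text>
</g>
```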

To emphasize that, at the risk of seeming obvious, screenreaders should not typically try to “read out” the characteristics of graphics elements directly. Perhaps there could be a special mode in which they do so, but even sighted users would have a hard time making sense of all but the most basic SVG shape descriptions; you might be able to mentally picture what this element looks like, what its shape is, where it’s positioned, how big it is, and what color it is:
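For instance, a simple shape like this (the coordinates are illustrative):

```xml
<polygon points="50,5 90,27 90,73 50,95 10,73 10,27" fill="purple"/>
```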

It’s a simple shape, a purple hexagon; <path> elements get far more complex and impenetrable.

So, it’s up to the author to provide text equivalents for different graphical shapes. But does it matter that on a bar chart, “Employment disability” is represented by a blue rectangle about 79 pixels high and 33 pixels wide? No. So the author has to provide an appropriate level of abstraction and meaning, as well.

In addition to not knowing what bits are important, another unfortunate aspect of simple document-order reading of text is that screenreader users can’t easily navigate around a chart, comparing the values of different bars, the way a sighted user could.

ARIA may be the answer here. As currently defined, ARIA is specifically aimed at user-interface controls, like sliders or choice menus. For the most part, GUI controls are a concrete set of well-defined and bounded conventions, each fitting a specific task, which users learn to manipulate in pre-determined ways; data visualizations represent a different set of conventions with a similar set of constraints. So, we could extend ARIA to represent the roles, states, and properties not only of GUIs, but of data visualizations, to allow visually-impaired users to explore and interact with the visualizations in much the same way as with a GUI. We could add much-needed structure to these data visualizations, with new ARIA roles and states, and improve navigation options.

For example, look at a bar chart. A sighted user will scan the axes to contextualize, then compare different specific bars to one another, or look at the overall trend; a y-axis with the numbers 0–20, or an x-axis with a set of dates, is meaningful, because it sets the range of values or dates. For a screenreader user with some future ARIA markup on those axes, the endpoints of the numeric or date range could be spoken (e.g., “0 to 20,000, in increments of 1 thousand”, or “January of 2007 to December of 2011, in quarters”). But for now, that’s speculative.

In the meantime, don’t add titles when visible text will suffice, and don’t duplicate information in a <title> or <desc> that’s already available as text. Duplicated information is repetitive, redundant, superfluous, periphrastic, and unnecessary. It’s extra work anyway, so just don’t do it.

Why Add Metadata?

Okay, so why should you do this in the first place? Assuming common decency isn’t enough reason, and that you’re laboring under the delusion that you don’t have a visually-impaired audience (or that you won’t be part of that demographic as you get older), you should understand that providing lots of titles and descriptions helps make your content more findable; Google indexes SVG files now, and uses text content as part of its search criteria. It also improves understandability: sure, it’s easy enough to find a star element by scanning the markup for its shape data…

… but if you add a <title>Star</title>, it’s even easier to find, for you and for anyone maintaining or reusing your content. And if that’s still not enough rationale, in most browsers the <title> is revealed as a tooltip on mouseover, so in a map of your country, each state or region gets its own tooltip for free.
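To sketch the difference (the path data here is illustrative):

```xml
<!-- Without a title: just anonymous geometry -->
<path d="M50,5 61,40 98,40 68,62 79,96 50,75 21,96 32,62 2,40 39,40 Z"/>

<!-- With a title: findable, tooltipped, and read by screenreaders -->
<path d="M50,5 61,40 98,40 68,62 79,96 50,75 21,96 32,62 2,40 39,40 Z">
  <title>Star</title>
</path>
```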

Author Awareness

So, I’ve talked at length about the what, how, and why of SVG accessibility, and now I want to say something about the who: you! Well, you and a bunch of other people. None of this is hard, not nearly as hard as simply making people aware of the technique and the impact it has. So, if you’re using SVG, use these techniques so others become aware of them; if you’re not using SVG, start doing so. Tell other people about how to make their graphics accessible (and tell them to use SVG). If you have a script library or application that outputs SVG, try to make the SVG accessible.

For my part, once we have things a bit better defined, I’ll be writing this up in a more comprehensive, instructional style on Web Platform Docs, so more people discover it.

And if you’re interested in helping us clarify the current state of SVG accessibility, and to plan future improvements, I encourage you to join the W3C Accessible SVG Community Group. It’s a free and open group, with smart, passionate people who are helping make the world a better place (so if you’re not smart, are apathetic, or are kind of a cynical, self-absorbed jerk, you don’t need to join).

Describler

In writing this blog post, I scratched my own itch and started a little prototyping project called Describler to help improve your SVG’s accessibility. It lets you upload your SVG file, provides a graphical tree view of the document, and lets you add or change titles and descriptions.

Using the miracle of modern browser technology, it even lets you hear how your SVG should sound in a screenreader, using the experimental Web Speech API (which sadly only seems to work in Chrome, and only then after you enable its flag). I’m hoping this spec moves forward quickly, and is adopted by other browsers, because it’s super-cool.

Describler does a few more things that help accessibility and reusability as well, like making your SVG auto-scaling, so you can use it at any size, and it will grow or shrink to fit the size of the container.
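I won’t claim this is exactly how Describler does it internally, but the usual auto-scaling recipe is to trade fixed dimensions for a viewBox, so the image grows or shrinks with its container:

```xml
<!-- Before: fixed size, never scales -->
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">…</svg>

<!-- After: scales to fit its container, preserving its aspect ratio -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300">…</svg>
```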

I did this as my W3C Geek Week project, where we set aside our normal work to improve our skills; I really enjoyed getting to code again, and I hope to keep expanding Describler, so maybe I’ll talk more about it another day. I have a bunch of ideas for improvements, and if you have suggestions too, make a comment or pull request on my github repo. (Be kind, please… I knocked this out in just a few days, and I know the code and interface are a bit ugly still.)

Conclusion

Adding titles and descriptions is only one aspect of accessibility in SVG. Interactivity and keyboard navigation, zooming and panning, link activation, document hierarchy, color choices, animation control, and many other factors come into play for various accessibility needs. But providing non-visual representation for users with visual disabilities is the most obvious accessibility challenge for a graphics format, and it is the closest analog of the behavior of @longdesc.

Not all authoring tools have sufficient support for adding accessibility. They should add it, and we (those of us in standards, in the SVG WG, and the people using those tools) should better define how they should do it. But adding accessibility metadata features is easy, and visible text is already well-supported in authoring tools, so I’m optimistic. I hope tools like Describler will help as well, and will fill the gap until native authoring-tool support is better.

My current inclination is that more testing is needed to determine how widely SVG accessibility features are supported, and in the meantime, I don’t see harm in providing a @longdesc, unless it simply duplicates what’s already in the SVG (remember what I said about duplication). My prediction is that for script-based data visualizations, at least, there’s almost no chance that people will provide a @longdesc, and a reasonably good chance they’ll provide discrete titles; for other uses of SVG that might need a longer description, there’s still a better chance that people will provide a <desc> than a @longdesc, because even with authoring challenges, it’s easier to enhance one file than create and maintain two interlinked files. As a pragmatic matter, I think the notion of using @longdesc with SVG is largely academic, and will be of decreasing importance going forward.

I’ve read claims that @longdesc is a requirement in some organizations and countries, by law or policy; if this is the case, I believe that SVG’s native capabilities should satisfy those requirements, and policies should be revised to address that, where applicable.

The more accessible SVG content is produced, the more compelling the case for native browser and AT support, and the better the experience for people who need an alternative for graphics. So, go make some great content, please!

It’s a Wrap!
http://schepers.cc/its-a-wrap
Fri, 31 May 2013 06:50:40 +0000

I’ve long been an advocate of wrapping (or “flowing” or “multi-line”) text in SVG. This is basic functionality, and people have been asking for it since SVG started.

I’m also an advocate for moving features out of SVG and into CSS, like gradients, animation, compositing, filters, and so on; the benefits of a single code base for HTML and SVG applications, and ease of learning and maintenance, are obvious.

So, I was excited when the CSS WG took up work to allow wrapping text to arbitrary shapes, like circles or stars, not just boxes. This will enable whole new magazine-style layouts that will benefit web sites and ebooks alike. But I was disappointed to find out that this work had suffered some recent setbacks and delays.

So, I’m calling for a simple solution now, while we wait for the more complete solution later. And the nice thing is, my solution also relies on CSS!

Text in SVG is text. That might seem obvious. But it means that it’s readable, selectable, indexable, searchable, findable, and accessible. Wrapping text might seem like the next logical step for a language that displays text. But believe it or not, about a decade ago, this was considered controversial! In response to SVG 1.2 adding text wrapping, some influential people complained about SVG “duplicating” functionality from HTML and CSS, arguing that SVG is a graphics language, not a text language, and that any line wrapping should only be done using embedded HTML fragments; they also opined (but could not back up with facts) that SVG’s line-wrapping was somehow in “conflict” with the line-wrapping of CSS. The reality was that SVG, being an XML format, was collateral damage in the incipient HTML5 vs. XHTML2 battle, but to appease those who were waging a Distributed Denial of Spec attack, we ultimately ended up removing text-wrapping from SVGT 1.2, deferring it to SVG 2.

We didn’t realize how long it would be before we could start in earnest on SVG 2. And all those using SVG in the meantime have suffered not because of technological limitations, but because of politics.

People are forced to embed HTML in their SVG, or to manually break the text into chunks and use the <tspan> element to specify each line, rather than letting the browser make the breaks for them dynamically.

It’s time for that to end.

Simple Text Wrapping

The main use for text in SVG is labels and other short runs of text. We need to prioritize making wrapping and positioning such text easy-peasy.

Since wrapping text in boxes was always easy in SVG, and we were getting pushback about text-wrapping in general, the idea many years ago was to do fancy text-wrapping that you couldn’t do with HTML+CSS: wrap to arbitrary shapes, put text in an hourglass and animate the letters flowing down, make it easy to do the stuff it would be hard to do otherwise. So, the SVG Working Group focused on that. And now that that’s eventually going to be enabled through CSS, we can go back to basics, and meet the needs of people using SVG now.

For simple stuff. Like labels.

But how can we do this incredibly difficult task? What Herculean undertaking, what Rube Goldberg device can let us split a single line of text into multiple lines?

The width property (or attribute).

Stay with me, I know it’s hard to understand. By specifying the width property on a <text> element, I’m proposing that the text stop when it reaches that width, and that subsequent words be rendered on the next line, where the line height is dictated by factors like the font, font-size, and line-height properties… you know, like in CSS.

Still fuzzy? Let me draw you a picture:

“Tomorrow, and tomorrow, and tomorrow; creeps in this petty pace from day to day, until the last syllable of recorded time. And all our yesterdays have lighted fools the way to dusty …death.”

Really want a “paragraph” to break the flow of some text into chunks? Use a <tspan> with dy="1em" to create that break in the flow. Want to restrict the height, or use wrapping with vertical text? Use the height property. Want overflow ellipses? Use text-overflow: clip ellipsis.
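Putting those pieces together (a sketch of the proposed behavior; dimensions and text are illustrative, and none of this is standard yet):

```xml
<!-- Proposed: width wraps the text; height bounds it;
     text-overflow handles what doesn't fit -->
<text x="10" y="20" width="220" height="120"
      style="text-overflow: clip ellipsis;">
  This first run of text wraps at 220 units and forms one chunk.
  <tspan dy="1em">A second “paragraph”, pushed down to break the flow.</tspan>
</text>
```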

I’m all for us also adding support for even more nifty CSS layout features like Flexbox, or even margins and padding… later.

Right now, I want text to wrap when I specify a width on a <text> element. Basic line-wrapping code already exists in all the major browsers, and SVG is supported in all those browsers. So, browser vendors, let’s see some prototypes! Here’s the code sample for you:
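Something along these lines, I’d suggest (dimensions illustrative; the width property is of course the proposed, not-yet-standard part):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="150">
  <!-- Proposed: width caps the line length, and the browser wraps the rest -->
  <text x="10" y="25" width="200" style="font: 16px sans-serif;">
    Tomorrow, and tomorrow, and tomorrow; creeps in this petty pace
    from day to day, until the last syllable of recorded time.
  </text>
</svg>
```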

Update: a Firefox engineer has already responded with an experimental patch:

“I tried implementing part of what you proposed for text wrapping in a rectangular area. I just did the width property bit, not the height property, and you can’t set them using presentation attributes… It also only works for simple length values, not percentages or calc() values… it shows that my quick patch has some problems… For one thing there is a blank line at the top of the rectangle… but try these [temporary] experimental builds out if you’re interested (you need to go to about:config and set the svg.text.css-frames.enabled pref to true).”

This is just a rough first pass, but it’s a proof-of-concept that this should be pretty easy to do.

Yet Another Update: Both the SVG and CSS Working Groups have accepted my initial proposal, so the next step is to put it in the SVG 2 spec and incrementally improve it, then persuade implementers to release it in the next major revision of browsers!

(Pipe down, Boromir!)

WebFonts in SVG
http://schepers.cc/svg-webfonts
Sun, 29 Jul 2012 20:24:15 +0000

Several times recently, people have asked on IRC how they can use nice fonts with SVG. My answer in the long-ago past, when Adobe’s excellent SVG Viewer plugin roamed the Earth, was to use SVG Fonts directly embedded in the SVG file… but that’s no longer practical with the current varied browser support.

By far the easiest way to do this today is to use webfonts, such as WOFF.

Authoring tools, like Inkscape and Adobe Illustrator, should simply manage this for authors, but until that happens, I thought I’d share a relatively simple workflow that has helped others. (Warning: I will use the words font and typeface with careless abandon to semantics in this article, though I know the difference. I’m afraid the more sensitive souls among you may suffer apoplectic righteous indignation.)

I will walk you through this workflow step-by-step, but if you get lost, just look at the source code… it’s pretty self-explanatory.

1. Find a font you like. For this example, I chose Riesling, an Art Deco typeface by “Bright Ideas”. I made sure the font was free and didn’t have any restrictions for online or commercial use.

2. Upload the font to FontSquirrel’s @font-face generator, and download the resulting package. This contains the webfonts, as well as some sample files (though not an example of how to use the files in SVG).

3. Copy the resulting webfont files (*.eot, *.ttf, *.woff, *.svg) into the same directory as the SVG file that will reference them. (Note: don’t confuse the SVG file in the FontSquirrel package for a content file; it is an SVG font, with no rendering content.)

4. Copy the CSS @font-face style rule from the FontSquirrel package’s “stylesheet.css”, and put it in your SVG’s <style> element.

5. Optionally, for local testing, install the original TTF file as a system font, and add a local() source to the @font-face rule.

6. Add a style rule for <text> elements, using the resulting font-family. In my example, I also created a style rule for <textPath> elements.
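Put together, the relevant part of the SVG ends up looking something like this (the file names and font-family name are my guesses at what FontSquirrel would generate for Riesling; the @font-face rule is abbreviated from the generated stylesheet):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="600" height="200">
  <style type="text/css">
    @font-face {
      font-family: 'RieslingRegular';
      src: url('riesling-webfont.eot'); /* older IE first */
      src: local('Riesling'),
           url('riesling-webfont.woff') format('woff'),
           url('riesling-webfont.ttf') format('truetype'),
           url('riesling-webfont.svg#RieslingRegular') format('svg');
    }
    text { font-family: 'RieslingRegular', serif; font-size: 48px; }
  </style>
  <text x="20" y="100">Nice fonts in SVG</text>
</svg>
```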

A Word on the Future of SVG Fonts

Since I’ve mentioned SVG Fonts a couple of times, I thought I’d leave you with a final note about what the future seems to hold for them.

SVG Fonts were added in SVG 1.0, way back in 1999, as a way to embed fonts in the same file, using vectors and all the capabilities of SVG, including separate fill and stroke (which could be gradients or other paint sources), and even exotic features like clip-paths, animation, and so on; Jérémie Patonnier has a great article on the topic. As cool as this is, it doesn’t necessarily work well with existing font engines… The Adobe SVG Viewer had support for SVG Fonts, and Opera and WebKit added support for the less-powerful SVG Tiny 1.2 subset as well; but Mozilla and Microsoft decided not to add support for SVG Fonts, and that limits their usefulness. It doesn’t seem likely that SVG2 (being developed now) will include SVG Fonts, not necessarily even the more limited subset (though there is not yet consensus in the SVG Working Group).

In practical usage, SVG Fonts were once the only way to use webfonts on iOS devices (e.g. the iPhone and iPad), which is why FontSquirrel includes them in their @font-face package; but this is no longer the case: iOS 4.2 added support for TTF fonts, which reportedly perform better. I expect this use of SVG Fonts to dwindle away… people will naturally want to use as few font formats as possible.

But for all of you out there who, like me, love the cool things you can do with SVG Fonts that aren’t possible in traditional outline font formats, don’t despair! Even as I write this, a proposal is being prepared by the SVG glyphs for OpenType Community Group to add a subset of SVG to OpenType, for more interesting typeface capabilities.

SVG Fonts are dead! Long live SVG Fonts!

Divide and Conquer
http://schepers.cc/divide-and-conquer
Sun, 09 Oct 2011 06:58:25 +0000

I have Libertarian friends who think Ron Paul has a chance at the GOP nomination… My intuition is that they are engaging in wishful thinking. My best guess is Romney will take it, but I’m hoping for Cain, for 2 reasons:

It would be kind of awesome to have 2 black candidates for President of the United States; and

I like the idea of Obama going up against McCain and then Cain… it would confuse future schoolchildren.

But should my guess prove correct, and Paul lose the Republican nomination, where would that leave him? He’s garnered quite a lot of support in some polls, and that might encourage him enough to consider splitting off again to run as an independent. After all, he is 76 years old, and may not have that many more chances to run (though he’s pretty spry), so he may as well throw it all in the ring for 2012. (Why independent and not Libertarian? He’s already got the Libertarian vote, and independent status might get him a few people who wouldn’t vote strict Libertarian… it’s a safer label.)

I would love this.

It would split the conservative vote between the Republican nominee (let’s say Romney) and Paul, and hand another term to Obama. It’s not that I think Obama is perfect, far from it; he’s too conservative or moderate for my taste, and I don’t like his persisting in these ridiculous wars, failing to revoke the expansion of Presidential powers, re-signing the PATRIOT act, failing to hold banks accountable, and many other weak moves. But I still support him in general, and I think he’s doing a pretty good job under very trying circumstances not of his making. I support Obama’s jobs bill, and raising taxes on millionaires; I even support him raising taxes on me, and I’m solidly middle class. And I think we need another Obama term to see his efforts really bear fruit, and to give the economy a chance to recover.

But that’s not what excites me about the idea of a Ron Paul schism from the GOP. With his seeming popularity among an increasing number of people, and the coming ugly break-up of the Tea Party and Republicans, and similar disillusionment on the left, it might increase the dissatisfaction with our two-party duopoly, and pave the way for voting reform where it is easier for multiple parties to credibly run for President.

The current system is a crock, where no matter if a Republican or a Democrat wins, the Republicans and Democrats win, and both parties have too much incentive to keep this convenient arrangement that neither will work to reform it, and neither will stray too far from the policies of the other. This race to the bottom is driving us away from real issues and further and further to the right (which is not right).

The simple fact of the matter is that a country of over 300,000,000 people must by necessity have more than 2 opinions on any given subject, so 2 parties can’t really represent their views with any degree of accuracy or subtlety. So, as a result, the “national dialog” consists of hand-picked non-topics that are blown out of proportion by the media, because that’s the only thing that the established parties will discuss. We need more diversity of voices in Washington; for that, I appreciate Ron Paul actually having the nerve to say things that push the boundaries of what Republican voters might want to hear (even if I think his particular stances are nonsensical sophistry or cynical manipulations).

And worse yet, this false black-or-white, uncompromising, your-team-vs-my-team bullshit isn’t just wasting our time and money; it’s actively driving a wedge between Americans, radically polarizing us into knee-jerk offense and defense, resulting in a fruitless stalemate. When I talk one-on-one to self-identifying Republicans, and get past the rhetoric to the real issues and real values, I usually find we think fairly similarly… we mostly disagree on execution of those ideas, and on the personalities of the politicians; but I know that once we leave that personal, human conversation, the media machine will hammer at us relentlessly until we forget that point of similarity and solidarity. We too easily devolve into slogans and talking points, reacting against the banality of the opposing politicians’ public statements and stances.

We need more than two political parties, to baffle that echo chamber. We need new voices diverging and disagreeing, then coming to compromise, then pushing new ideas into the political arena, and that won’t happen when it’s just two talking heads shouting each other down. But right now, it takes a tremendous effort and a fortune to try to get on the ballot in each state; state-controlled ballot access means that even if a third party could get enough momentum and campaign donations to have a real shot, it blows that money and energy just trying to get on the ballot, while the Democratic and Republican candidates use their money to influence voters directly. This is only one of many reasons why “States’ Rights” advocates are dangerously naive; issues like this, which affect national elections, should be decided at the federal level.

Of course, lowering the bar for additional parties to get on the ballot, as difficult as it might be, is only one step. Even when they are there, the best they can do is deflect votes from one of the other candidates, as Nader did from Gore in 2000, and as I hope Paul does from Romney in 2012 (though honestly, I think I could just barely live with Romney as President… just not Paul, Cain, Sanctimonious, Perry, or that one lady with the creepy crazy eyes… no, no, the other one).

What we would also need is the ability for people to make real and direct choices about who they want as President (and for Congress)… not simply pick the lesser of two evils, but rank the candidates according to how well each one matches their own preferences, how well each candidate represents that voter’s views. This is nothing new… we’re one of the last first-world nations still using the first-past-the-post, winner-takes-all ballot system, when what we need is a more sophisticated ranked voting system, where if your first choice of candidate doesn’t have enough votes to win, your vote is counted instead for your next choice, then on down the line, until one candidate emerges that the most people can live with. Then, an alternative party candidate might actually stand a chance of winning… and even if they didn’t, it would disrupt the carefully-controlled disequilibrium that the duopoly enjoys, so we could at least hear those new voices.
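For the curious, that “count your next choice instead” process is usually called instant-runoff voting, and it’s simple enough to sketch in a few lines of Python. This is just an illustrative toy (the function name, candidate letters, and ballot counts are all made up), not a real election-law implementation:

```python
from collections import Counter

def instant_runoff(ballots):
    """Count ranked ballots: each round, tally every ballot's
    highest-ranked remaining candidate; if no one has a majority,
    eliminate the last-place candidate and transfer those ballots
    to their next choice."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # strict majority wins
            return leader
        # Eliminate the candidate with the fewest current votes.
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Toy three-way race: B leads on first choices, but C's voters
# prefer A, and their transferred votes decide the winner.
ballots = [
    ["A", "B"], ["A", "B"], ["A", "B"],
    ["B", "A"], ["B", "A"], ["B", "A"], ["B", "A"],
    ["C", "A"], ["C", "A"],
]
print(instant_runoff(ballots))  # prints "A"
```

Note what happens in the example: under first-past-the-post, B wins with 4 of 9 first-choice votes; under instant runoff, C is eliminated, C’s two ballots transfer to A, and A wins 5–4, the outcome a majority can actually live with.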

But could this happen? Could a national referendum on voting reform actually bloom? Well, I can imagine a scenario where it might. Obama is limited to one more term, and I have a feeling that he’s going to be more aggressive during his last term… he has less to lose. So, he could well support such an effort, on the basis that if a Republican wins in 2016, they will try to dismantle his reforms… so he’s better off trying to widen the field, on the chance that it would yield a less clear “mandate” for destroying what he had built just to score one more point in political pong.

Right now, the duopoly strategy against us is winning: they divide and conquer. It’s a proven principle that we need to apply in reverse: we need to divide ourselves into many parties, so we can conquer them.