

The seven deadly sins of website accessibility

In defence of website accessibility

I’ve worked on New Zealand government websites in one capacity or another for almost 10 years. One of the central pillars of managing these websites has been maintaining their accessibility — and sometimes this can be a pretty challenging task, mainly because people who don’t work in the web space often don’t know what website accessibility even is, or why it’s important.

I used to explain website accessibility in terms of making sure that people with sensory, motion, or cognitive challenges were able to get the same level of access to website information as people who didn’t experience those challenges. “A blind user using VoiceOver on their iPhone should be able to access our website just as easily as a sighted user on a desktop PC.” However, I would often get pushback from people when I explained it in these terms, with some people insisting that the percentage of these users must be so small that it doesn’t matter, and can we please just get that 50-page scanned PDF up on the website without further discussion, it’s urgent.

whakatane.govt.nz - screenshot of old version
The 2009 to 2013 version of whakatane.govt.nz got an above average accessibility score, but there was room for improvement.
whakatane.govt.nz website screenshot - new version.
My redesign of whakatane.govt.nz scored higher on accessibility than any other local government website in New Zealand.

New Zealand government web standards make it pretty easy to counter that argument, because you are required to maintain a certain level of website accessibility for these sites. It’s not optional.

The government doesn’t go into a whole lot of detail about what happens if and when they find out your website isn’t meeting the standard, but I imagine the results wouldn’t be all that favourable to the people responsible for maintaining any site that falls afoul of an audit.

Partially because of that, but mostly because I think the web should be a democratised space where everyone gets equal access to information, I’ve made it my business to educate myself and others about website accessibility as much as possible. I’ve also been fiercely protective of maintaining the standards on any website I’ve worked on — and not just government websites — even if it leads to conflict with people who insist it can’t be as important as all that.

Because it is important. The reality is that maintaining high standards for website accessibility doesn’t just make the website better for blind users, or deaf users, or people navigating the website by voice or eye tracking, or people with cognitive difficulties. Making websites more accessible makes them better for everyone.

What “website accessibility” means

The Web Content Accessibility Guidelines (WCAG) go into great detail on what website accessibility actually means and what the standards are. The bottom line, too-long-didn’t-read summary of the guidelines is that they deal with how to design and maintain websites and website content so they serve the needs of three broad categories of users.

People with sensory challenges

Because the web is a primarily visual medium, people usually focus on how to maintain website accessibility for blind users. That’s good, but it’s only a single facet of the sensory difficulties users may experience. Deaf users can’t hear what’s being said in a video. Older users may be able to see but may also have any number of age-related visual deficits. Colourblind people can’t tell that the important text on your page is important, because the only thing denoting it as being important is that the text is red. Even people with otherwise normal sight can have bad days — my eyes refused to focus for the first couple of hours after I woke up this morning.

Maintaining accessibility for people with sensory challenges is where most of the focus and effort seems to go, and for good reason — because it’s arguably the easiest thing to get wrong.

People with motion challenges

The web is also a medium that demands some kind of user motion to interact with it: clicking or tapping buttons and links, scrolling through pages, typing to fill out forms. For people who have difficulty making these kinds of motions, or who just flat out can’t move at all, trying to interact with websites that aren’t designed with them in mind can range from frustrating to outright impossible.

The main focus in this area again seems to be on the most severely challenged users — paraplegics, for example. But motion challenges can vary just as widely as sensory challenges, ranging from an elderly person with Parkinson’s disease to a person missing several fingers. I find using mice and trackpads somewhat challenging on Windows devices, because the pointer acceleration profile on Windows PCs tends to absolutely suck compared to the profile used on Macs. This means I rely on keyboard navigation much more heavily when I’m on a PC, but lots of websites make keyboard navigability way harder than it has to be.

For the most part, addressing the needs of people with motion challenges happens at the design and build stage of the website, and isn’t necessarily something that web content authors and maintainers need to focus on as much as the other two categories. That said, it’s a good thing to have in the back of your mind anyway.

People with cognitive challenges

Back when I said maintaining accessibility for people with sensory challenges was arguably the easiest thing to get wrong, I might have spoken too soon. Because providing information for people with cognitive challenges is usually where websites end up getting it very, very wrong.

Cognitive challenges span a wide spectrum. It can mean someone has a developmental or learning disorder, but it may also mean something as simple as a user who speaks English as a second (or third, or fourth) language. A person who doesn’t speak English natively doesn’t have an actual cognitive deficit, but they will still have more difficulty understanding complex information.

People experiencing chronic or acute stress (such as during an emergency) may also have difficulties understanding poorly-optimised written content. A new mother whose newborn hasn’t allowed her a full night’s sleep in a month. Someone trying to find information on CPR when their partner is unresponsive and paramedics are still 20 minutes away. An elderly man who has just been diagnosed with COVID and fears it may be a death sentence. All of these people are going to have roadblocks to their reading comprehension that they might not have under better circumstances.

Government departments try to get this right when they produce content — well, sometimes they do. Unfortunately, governments love their jargon, and abbreviations, and long-winded technical documents written more in legalese than in plain English. 

“Plain English” is the key phrase here, and the easiest way to make sure written content is readable for the greatest number of people. The NZ government has recently made “plain English” a legal requirement for government-produced content, but it remains to be seen how vigorous they’ll be in enforcing the standard.

Seven common web accessibility mistakes — and how to avoid them

The WCAG guidelines go into exhaustive detail about every aspect of web accessibility and how to adhere to the recommended standards. But over the past 10 years, I have seen seven very common mistakes that people make when producing web content, and fixing these issues will go a long way toward making your information more accessible to everyone.

1. Inaccessible documents

This is by far the most common issue I have seen when maintaining web content for government websites. The scenario is usually something along these lines:

  1. The organisation’s legal team sends through a PDF that absolutely has to go on the website right now.
  2. Because a Very Important Person signed the document, and because the legal team doesn’t know how to produce an accessible PDF containing a signature, they have scanned the printed document and emailed that to you.
  3. Since the document is just a scanned version, from a digital perspective it’s no better than a fax, or a screenshot of text. It contains no machine-readable text, and therefore the document fails to meet the most basic level of accessibility standards.

I have had to send these types of documents back to their owners, with a brief explanation for why they can’t be used online and how they can produce an accessible alternative, literally hundreds (and possibly even thousands) of times over the past 10 years. There are two easy ways around this.

  1. Use Adobe Acrobat Sign or a similar app to set up digital signatures for documents; or
  2. Omit signatures from documents and upload the unsigned version — so long as the physical copy of the signed version exists somewhere, you are legally covered.

Considering how easy Adobe and other software vendors have made it to generate accessible PDFs, even with digital signatures, there’s really no good excuse for the “let’s scan this printout at the copier and email that to the Web Team” thing anymore. There is of course a broader discussion to be had around whether information should be shoved into PDFs and Word documents at all, because it’s far easier to make that information properly accessible if it’s actual text on a webpage rather than an attached document. However, at least in some circles (government especially) it’s often impractical to take the text-on-webpage approach. The bottom line here is that if you feel like you absolutely have to have your information buried in a PDF, the very least you can do is make that PDF actually readable.

2. “Click here”

The words “click here” are the bane of my existence. People who write links using “click here” or even just “here” are doing a serious disservice to website visitors, and they don’t even know it. A website link should give you some idea of what it will do and where it will take you before you click it. Consider the following two options:

  1. Click here
  2. View Rick Astley – “Never Gonna Give You Up” – YouTube video

Both of those links go to the same place. The difference is the first one doesn’t give you any context at all for what the link will do, while the second one tells you exactly what to expect (and therefore, you know not to click on it if you don’t want to).

It’s not enough to have the context for the link somewhere outside of the link, either. Plenty of people think this is just fine:

  • To view Rick Astley’s music video for “Never Gonna Give You Up”, click here

To explain why this isn’t an acceptable way to write a link, I have to put you in the shoes of a blind user navigating your website. Screen reader software usually comes with a feature that identifies web page headings (more about those later) and links on the page. This acts as a shortcut for the user and allows them to skip between headings and links without having to wade through all the intervening text first.

What’s this person’s device going to say to them when they come across that link in the example sentence from earlier? “Click here.” That’s it. No context at all for them. Worse, if you’re like a lot of web content authors I’ve seen, you’ll have the words “click here” or “here” peppered throughout your text multiple times, so all this poor user is going to hear when they navigate through your page links is “click here” “here” “click here” “click here” — incredibly frustrating.

Telling people to “click here” also ignores the basic fact that a lot of your users — probably more than two-thirds of them — are going to be on a smartphone when they visit your site. They can’t “click” anything. They’ll understand what to do anyway — probably? — but you should avoid telling them to do something they fundamentally cannot do on the device they’re using.

The easiest way to get around this problem is to write the text in your links as though they were the only thing on the page. Imagine that all other text on the page went away, and all you were left with was the blue (or other colour) text with the underline beneath it, directing the user to do something or go somewhere. Your job is to make sure they understand exactly what that link does, without any other help outside of the link itself. Write your links as a short action phrase: “Go to (the name of the website)” or “Read (the name of the document)”.
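In HTML terms, the difference looks something like this (a minimal sketch; the YouTube URL is the real one for that video, and everything else is illustrative):

    <!-- Bad: the link text carries no context of its own -->
    <p>To view Rick Astley's music video for "Never Gonna Give You Up",
      <a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">click here</a>.</p>

    <!-- Good: the link text still works if it's the only thing on the page -->
    <p><a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">View Rick Astley
      - "Never Gonna Give You Up" - YouTube video</a></p>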

3. Not using headings (or using them wrong)

Remember when I said blind users have screenreaders that can skip between headings? Do you know what happens for these users when you’ve used bold text for headings instead of actual h2, h3, or h4 heading text? You probably don’t know, but now you will: the screenreader doesn’t identify any headings, so it acts like there aren’t any. So the user can’t skip between headings, can’t identify the overall structure of the page at a glance, and has to wait for the screenreader to read out potentially almost the entire page before getting to the content they actually need.

Not using properly marked-up headings also means Google’s web crawler has a harder time identifying what your page is supposed to be about, which can cause your content to take a big hit on search engine optimisation, meaning people won’t find it as easily as they could have.

Another thing I see people doing is using headings stylistically and skipping the styles they don’t like. They’ll skip from h2 all the way down to h5, then go up to h3 for some reason, then back to h5. When you do this, what you’re telling both screenreaders and Google is to present the content in an order of hierarchy entirely different from what you might intend. If you don’t understand what I mean by that, the simpler version is that there’s a right and wrong way to use headings:

  • h1 is the top-level heading for the page. This should be your page title. There shouldn’t be another h1 anywhere on the page. Most website CMS editors will let you use another h1 if you want to — but don’t.
  • h2 is the actual top-level heading for the page content. This should be the first heading type you use when you want to carve up your page content into smaller chunks. If you think of your page as a book, and h1 as the title of that book, then h2 headings are the titles of your book’s chapters.
  • h3 is a subheading under h2. This is a subsection within an h2 section.
  • h4 is a subheading under h3. This is a sub-subsection within an h3 section.
  • …and so on down through h5 and h6.

The key here is not to skip around randomly from h2 to h4, or skip h2 altogether and go straight to h3. Headings do more than make text bigger and bolder — they tell software what the overall structure of the page actually is, like building a table of contents for a book. If you use headings non-hierarchically, you’ll confuse the software, and you’ll confuse your users — and Google. You don’t want that.
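To make the hierarchy concrete, a well-structured page looks something like this sketch (the headings are invented, and the indentation is only there to show the nesting; HTML doesn’t require it):

    <h1>Dog registration</h1>              <!-- page title: the only h1 -->
      <h2>How to register your dog</h2>    <!-- a "chapter" of the page -->
        <h3>Registering online</h3>        <!-- a subsection of that chapter -->
        <h3>Registering in person</h3>
      <h2>Fees and penalties</h2>          <!-- the next chapter -->
        <h3>Discounts</h3>
          <h4>SuperGold Card holders</h4>  <!-- each level steps down by exactly one -->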

4. Jargon, jargon, and more jargon

If you’re writing for a specific kind of audience that already understands the language of whatever industry or field you’re presenting information for, then fine — go absolutely nuts with the jargon, highly technical language, and abbreviations. If you’re writing for a general audience, don’t do that. Use plain English wherever possible. Explain technical terms the first time they occur in your text. Spell out abbreviations the first time they show up, then use the abbreviation after that. Simplify your language as much as you can without losing the essential meaning of what you’re trying to say.

Some organisations try to fulfil this objective by producing “Easy Read” documents that use much simpler language and even diagrams to get the basic meaning across for people who might not be able to understand the content otherwise. This is a good approach, although it does mean creating and maintaining two entirely separate versions of the same content, which can get unwieldy very quickly. It’s perhaps better to take the opposite approach: make the text on your page as simple and widely accessible as possible, and offer the more detailed, more technical version separately for the subset of your audience that needs this more detailed (and possibly more jargon-filled) information.

5. Insufficient colour contrast between foreground and background

I usually see this in website design or graphic design more than in web content, but it’s still a problem I see way more often than necessary. Insufficient contrast between elements in the foreground (text, usually) and the background makes content anywhere from difficult to impossible for some people to read. If you were around in the early days of the web, you might remember people’s personal websites had an often, um, creative approach to the colours they’d use — purple text on a lime green background is one example that still haunts me 25 years later.

There’s a good reason most websites have a white background with black text (or vice versa): that’s the widest possible contrast between two colours, and therefore the easiest way for our eyes (and brains) to distinguish between two elements laid on top of one another. If you have light grey text over a dark grey background, or orange text over a yellow background, people are going to have a much harder time seeing it.

There are several online tools that can evaluate colour contrast and let you know whether the difference between foreground and background colours is good enough for most (if not necessarily all) people. The one I usually use is the snook.ca Colour Contrast Check, though there are many others that do the same thing.
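As a rough illustration (the hex colours here are arbitrary values I’ve picked for the example; the 4.5:1 minimum contrast ratio for body text comes from the WCAG AA standard):

    <!-- Fails WCAG AA: light grey on dark grey is only about a 2:1 contrast ratio -->
    <p style="color: #999999; background-color: #666666;">Hard to read</p>

    <!-- Passes easily: black on white is 21:1, the widest contrast possible -->
    <p style="color: #000000; background-color: #ffffff;">Easy to read</p>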

6. No ALT text on images

You’ve put a picture of a dog standing on a beach in front of a glorious sunset on your website. Almost everyone will see this cool picture — but not everyone. Blind users with a screenreader will come across this image, and if you haven’t put in what’s known as ALT text, their screenreader will do one of two things: skip right over the image, or worse, read out the image file name. Google’s web crawler will come across this image, but without ALT text to identify what it is, it won’t know what to make of it, and it might not show up in search results.

Here’s a worse example that I’ve seen way too many times in the government web space: you have a neat graphic of a complex workflow or a statistical graph that, for whatever reason, you can’t provide in any format other than an image. For anyone who relies on a screenreader, all the text inside that image might as well not exist if you haven’t put in any ALT text to provide context for them.

The easiest way around this is to just not use images at all, but that’s either no fun (in the case of dog pictures) or impractical (for graphs and similar content). The next easiest way around the problem is to use ALT text with every image you put on your website, if that image is supposed to convey something other than pure decoration. Use just enough text that a blind user (or Google) can identify the essential purpose of the image (or whatever text the image might contain). You don’t have to go crazy and write a novella-sized description — in most cases, a sentence or two is more than enough.
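In markup, the fix amounts to something like this (the file names and descriptions are invented for the example):

    <!-- Bad: a screenreader either skips this or reads out "IMG_4382.jpg" -->
    <img src="IMG_4382.jpg">

    <!-- Good: short, descriptive ALT text that conveys the image's purpose -->
    <img src="IMG_4382.jpg" alt="A dog standing on a beach in front of a glorious sunset">

    <!-- Purely decorative images get an empty alt, so screenreaders skip them -->
    <img src="divider.png" alt="">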

7. No captions on videos

I don’t know how many times someone in a government department has sent through a video that had to go up today, urgently, but their video didn’t contain captions and couldn’t be published until it did. Videos without captions don’t meet accessibility guidelines for the same reason that photos without ALT text don’t: for a significant share of your audience, a video without captions won’t do them any good at all.

Usually we talk about deaf users when we talk about people needing captions for videos, but it goes beyond that — they can be very helpful for people who don’t have English as their first language, or for people who have trouble understanding whatever accent people are speaking in throughout the video (imagine someone from the USA trying to understand a video from Scotland without captions, or someone from China trying to understand New Zealanders). People may also be watching your videos in an environment where they can’t hear very well (like on a train or bus) or they may be somewhere that they don’t want sound playing but still want to get the information in the video (trying not to wake up their roommate, or not wanting everyone in the house to know they’re watching a video about getting treatment for an STD).

Relying on YouTube’s automatically-generated captions is pretty much a recipe for disaster. They do an okay-ish job when the speaker has an American accent and isn’t using any slang or technical terms, but if (as just one example) your speaker has a New Zealand accent and is using a lot of Māori language, YouTube is going to absolutely mangle the captions more often than not.

Creating captions isn’t difficult — you literally just have to type out what people are saying into a plain text document and upload that alongside your video. It can be pretty time-consuming, however. 
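On the web, that plain text document usually ends up as a timed caption file sitting alongside the video. A minimal sketch, with invented file names and dialogue (YouTube and most other platforms accept caption uploads in this WebVTT format, or the similar SubRip format):

    <!-- The HTML side: a captions track attached to the video -->
    <video controls>
      <source src="council-update.mp4" type="video/mp4">
      <track kind="captions" src="council-update.en.vtt" srclang="en" label="English" default>
    </video>

The council-update.en.vtt file itself is nothing more than timestamps and the words being spoken:

    WEBVTT

    00:00:00.000 --> 00:00:04.000
    Kia ora koutou, and welcome to this week's update from the council.

    00:00:04.000 --> 00:00:08.000
    Today we're talking about changes to kerbside recycling collection.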

Why you should care about accessibility

Beyond just wanting to make sure that your website’s content can get out to as many people as possible, there’s a very good reason to want every website to be as accessible as it can be: someday, you yourself will need it to be. You will get older, and your eyesight will start to fail. Your hands might start to shake. You might have trouble understanding words and concepts that used to come easily to you. On a long enough timeline, you will need websites to have a basic level of accessibility if you want to be able to use them at all. You might not be blind, deaf, paralysed, or cognitively impaired to a profound degree, but at some point you will be old enough that reading tiny text, tapping tiny buttons, and understanding lengthy jargon-filled sentences will be difficult. If you can’t find it in yourself to make your website accessible for everyone else, do yourself a favour and make it accessible for the future version of you.

Breaking: Twitter still a cesspool, water still wet

I joined Twitter in 2009, mainly to see what all the hype was about. I liked the vibe of the place, and the fact that I could use third-party apps to post things there was also a definite bonus. Not long after I started writing for The Unofficial Apple Weblog in mid-2009, Twitter became the main medium by which I interacted with readers. One of those readers lived in the same town in NZ that I did, and he liked my work enough that he offered me a real-life, real-world job. Every job that I’ve held since then has been built off the foundations of that first one.

It’s therefore not hyperbole to say that I got where I am today because of Twitter. Without it, I wouldn’t have had anything like the career I enjoyed between 2012 and 2023. Hell, I probably wouldn’t still be in New Zealand if it weren’t for the career opportunities that opened up for me because of just a single early 2012 tweet from my NZ-based follower, who put out a call for any writers interested in a position…

After TUAW got shut down in 2015, I slowly used Twitter less and less actively. I still doomscrolled through it every day, just a casual observer of how the world slowly went down the tubes over the next few years, watching with equal parts horror and curiosity. As the world became the equivalent of a gruesome car wreck that you didn’t want to see, but simply could not turn away from, so too did Twitter.

As Twitter increasingly became a weaponised platform, I became increasingly disinterested in engaging with it. I don’t watch Fox News for a very good reason, and by 2018 Twitter was starting to feel very much as though it had become the Fox News of social media. As far-right radicals (including the President of the United States) amped up their use of the platform to spread messages of hatred, division, and disinformation, I hoped — and so did a lot of other people — that the people running Twitter would finally step in with some much-needed moderation.

Cancelled
I wish I had more than one of these stickers.

No such luck. Jack Dorsey, one of the guys who founded Twitter and who was still (kind of) running it, responded to calls for increased moderation with a bunch of tech-bro libertarianism and “both sides are equally bad” arguments. Whether he intended it or not, what I took away from his response to the influx of literal goddamned Nazis (LGNs) on Twitter was that, instead of wielding a banhammer like it was Mjolnir, Dorsey (and Twitter) were stroking their beards and furrowing their brows and saying, “Wait, wait: let’s hear them out.”

I knew then that it was time to jump off the burning oil rig. I could not in good conscience continue to support, with either my time or attention, a platform that allowed LGNs free rein to say whatever the hell they wanted, or to allow the unhinged President to use the same platform to spread malicious lies and disinformation like it was going out of style (Narrator: It was not going out of style).

I’d seen before what happens when a social media enclave is (mis)managed by a laissez-faire site owner who styles himself as a “free-speech absolutist” who expects the “marketplace of ideas” to condemn and ultimately negate any genuinely harmful or disgusting rhetoric. It’s the sociolinguistic equivalent of the right-wing mantra against the concept of economic regulation: “let the free market decide”. What happened in this earlier case — a web-based community of mostly like-minded people that pre-dated the very concept of social media itself — was that the trolls slowly and insidiously took the site over, drove out all the old regulars, and left the place a digital ghost town with nothing more than a single far-right nutjob shrieking conspiracy theories into otherwise empty online discussion boards.

In other words, the consequence of “free speech absolutism” on that website was — shockingly — that psychopaths with copious free time and zero social inhibitions or shame became the loudest voices in the room and drove everyone else away. 

I could see the same thing happening to Twitter, so it was in the middle of either 2018 or 2019 — I can’t remember which, but long ago enough now that it doesn’t matter — that I deleted my account and bid Twitter a one-fingered adieu.

Finally, in January of 2021 Trump did something that not even the pseudo-libertarian tech bros running Twitter could forgive. Apparently, using the platform to incite an insurrection against the government was one step too far under Twitter’s terms of service. Or at least it used to be.

And now, as we’re all painfully aware, Elon Musk owns Twitter. And unlike the surface-level success he’s had at Tesla, or the admittedly impressive work that SpaceX and Starlink have done, Musk has done more harm to Twitter in less than a year than an army of dedicated far-right trolls could have done in five.

I got ample evidence of Twitter’s further descent into poo-flinging madness earlier today. I was directed via Daring Fireball to a Tweet thread about “social authoritarianism”. I’m not going to go too deeply into what I think about what the author of that thread had to say, other than that I disagree with his conclusions primarily because they smell an awful lot like the “free speech absolutism” peddled by Musk, Dorsey, and the unnamed lazy site owner of that unnamed website from the deep woods of my past. (One of the replies to this thread even bloviates about the “marketplace of ideas” concept, which… heh. Thanks for proving my point, “libertarian” dudebro.)

Nah, what really drove home to me how much of a complete cesspool Twitter has become was the “More tweets” thing below that thread, which was just wall-to-wall right-wing bullshit. Some negative meme about Greta Thunberg. Kevin McCarthy wailing about Trump’s (supposedly) impending arrest being a miscarriage of justice. At least four separate “this guy from the hood OWNS Fauci when he goes door-to-door peddling the jab” tweets. At least three openly anti-trans tweets.

If that’s what counts as a “marketplace of ideas”, I have a better idea: nope. Your marketplace sucks, and I’m taking my attention — the real currency of the vaunted “marketplace of ideas” — elsewhere.

“So much for the tolerant left!” Well, yeah. I’m intolerant of intolerance. That isn’t the “gotcha” moment that people on the far right think it is. And as for the libertarian dudebros who want to use the same “let the free market decide” argument as capitalist oligarchs, I wonder how long their insistence on free-speech absolutism would hold up if they, like me in early 2014, had to spend the better part of a month dealing with an onslaught of harassers and trolls sicced on them by a prominent tech bro who didn’t like being publicly called out on his BS. How much value does your “marketplace of ideas” place on a dude who has nothing better to do for a full month than send an ongoing torrent of harassing tweets and emails, some of them threatening to end your career, others threatening to end your life? And this happened in 2014. I can only imagine how much worse it would be today.

Like so much of the pre-2016 world, I miss what Twitter used to be. I owe a lot of who I am today to the Twitter of those days. But that Twitter is long gone, and the twitching, shambling thing that bears its name today needs to be put out of our collective misery.

Conversations with ChatGPT: Motorcycle crash

I had the following conversation with ChatGPT about my November 2021 motorcycle accident. I have edited its responses to remove some redundancy.

Estimate the distance travelled by a person wearing a leather jacket and cotton jeans, sliding over a chipseal road surface, when their starting velocity was 100 kph.

To estimate the distance travelled by a person wearing a leather jacket and cotton jeans sliding over a chipseal road surface when their starting velocity was 100 kph, we need to make some assumptions and use some physics equations.

 

A wrecked BMW S1000R motorcycle.
The motorcycle slid 100 metres down the road before spinning 180 degrees and sliding into a ditch. It seemed okay at first…
BMW S1000R, upright in a ditch.
After hauling the motorcycle upright, the extensive damage became apparent.

Assumptions:

  • The person is sliding horizontally with no vertical motion.
  • The person is in a stable position and not tumbling or rotating.
  • The coefficient of kinetic friction between the leather jacket/cotton jeans and the chipseal road surface is 0.7 (a reasonable estimate for dry asphalt).

Using the equation for the distance traveled during constant acceleration, which is:

d = (v^2 – u^2) / (2a)

 

where:

  • d is the distance traveled
  • v is the final velocity (which we assume is 0, since the person eventually stops)
  • u is the initial velocity (100 kph, which needs to be converted to m/s)
  • a is the acceleration (which we can calculate using the coefficient of friction and the gravitational acceleration)

Converting the initial velocity to m/s:

100 kph = 27.8 m/s

Calculating the acceleration using the coefficient of friction and gravitational acceleration:

a = μg

a = 0.7 * 9.81 m/s^2

a = 6.87 m/s^2

Plugging in the values:

d = (0^2 – 27.8^2) / (2 * 6.87)

d = 223.5 meters

Therefore, the distance traveled by a person wearing a leather jacket and cotton jeans sliding over a chipseal road surface when their starting velocity was 100 kph is approximately 223.5 meters.

If I only slid 80 metres in the above scenario, what is the likely reason?

If the person only slid 80 meters in the above scenario where their starting velocity was 100 kph and assuming the same conditions and assumptions, there are several possible reasons for this:

  1. The coefficient of kinetic friction between the leather jacket/cotton jeans and the chipseal road surface might be lower than the assumed value of 0.7. This could be due to various factors, such as the road surface being wet or oily, or the clothing materials being worn or damaged.
  2. The person might have encountered an obstacle or a change in the road surface that caused them to slow down or stop earlier than expected. For example, there could have been a pothole, a bump, or a change in the road gradient that caused the person to lose momentum.
  3. The person might have tried to slow down or stop themselves by applying some form of friction or resistance, such as using their hands or feet, which would have reduced their sliding distance. This could be a deliberate action, such as attempting to slow down and stop as quickly as possible, or an involuntary reflex, such as putting out a hand to break a fall.
  4. There might be errors or inaccuracies in the initial assumptions or calculations. For example, the person’s actual mass and surface area in contact with the road surface could be different from the assumed values, or there could be other factors affecting the sliding distance that were not considered in the original analysis.

In any case, it is important to note that the calculation of the sliding distance is based on several assumptions and simplifications, and the actual distance traveled by a person sliding on a road surface can vary significantly depending on the specific conditions and circumstances.

 

A thick oil slick on a wet road.
A thick oil spill over a wet road, which caused total loss of friction and control.
An x-ray of shoulder and ribs.
X-ray showing ribs healed from an accident nine years earlier, but no new damage.

How much force is needed to break a person’s ribs?

The amount of force needed to break a person’s ribs can vary widely depending on various factors, such as the person’s age, sex, health status, and the angle and location of the impact. Generally, ribs are more susceptible to fractures from a direct blow or compression, and the force required to cause a rib fracture can range from as little as 4 to 5 kgf (kilograms of force) to more than 900 kgf.

In general, a force of 300 kgf or more can cause a rib fracture in most individuals, with some sources citing a range of 200 to 350 kgf.

 

Estimate the force of a 100 kg person impacting a road surface after falling off a motorcycle – the angle of impact vs the direction of travel is approximately 45 degrees in the x axis (in other words, at a 45 degree angle to the direction of travel).

To estimate the force of a 100 kg person impacting a road surface after falling off a motorcycle at a 45-degree angle to the direction of travel, we can use the principles of Newtonian mechanics.

Assuming that the person is traveling at a constant speed before falling off the motorcycle, the force of impact will depend on the person’s velocity and the angle of impact. Let’s assume that the person is traveling at 20 m/s (approximately 72 km/h) and impacts the road surface at a 45-degree angle to the direction of travel.

We can break down the force of impact into two components: one perpendicular to the road surface (normal force) and one parallel to the road surface (tangential force). The normal force is equal to the person’s weight, which is 100 kg multiplied by the acceleration due to gravity, which is approximately 9.81 m/s^2, or 981 N. The tangential force is equal to the person’s momentum in the direction of travel at the moment of impact, which is 100 kg multiplied by 20 m/s multiplied by the sine of 45 degrees (since the angle of impact is 45 degrees to the direction of travel), or approximately 1414 N.

The total force of impact is equal to the vector sum of these two components, which we can calculate using the Pythagorean theorem:

force = sqrt(normal force^2 + tangential force^2)

force = sqrt((981 N)^2 + (1414 N)^2)

force = sqrt(1974061 N^2)

force = 1405 N (approximately)

Therefore, the force of a 100 kg person impacting a road surface after falling off a motorcycle at a 45-degree angle to the direction of travel and traveling at 20 m/s is estimated to be approximately 1405 N.

Recalculate the above for an initial velocity of 100 kph.

Sure! Let’s recalculate the force of impact for an initial velocity of 100 km/h, which is equivalent to approximately 27.8 m/s.

Assuming that the person impacts the road surface at the same 45-degree angle to the direction of travel, we can use the same principles of Newtonian mechanics to calculate the force of impact.

The normal force, which is equal to the person’s weight, is still 100 kg multiplied by the acceleration due to gravity, or approximately 981 N.

The tangential force, which is equal to the person’s momentum in the direction of travel at the moment of impact, is now 100 kg multiplied by 27.8 m/s multiplied by the sine of 45 degrees, or approximately 1961 N.

Using the Pythagorean theorem to find the total force of impact:

force = sqrt(normal force^2 + tangential force^2)

force = sqrt((981 N)^2 + (1961 N)^2)

force = sqrt(5762042 N^2)

force = 2400 N (approximately)

Therefore, the force of a 100 kg person impacting a road surface after falling off a motorcycle at a 45-degree angle to the direction of travel and traveling at 100 km/h is estimated to be approximately 2400 N.

Is that sufficient force to break bone?

Yes, a force of 2400 N is sufficient to break bones, including ribs, depending on the location and angle of impact, the age and health of the person, and other factors.

Bone fractures can occur due to a variety of forces, and the amount of force required to cause a fracture can vary widely depending on the bone in question and the specific conditions of the impact. However, a force of 2400 N can certainly cause bone fractures, particularly if the impact is focused on a small area of the body, such as the ribs or the pelvis.

How much force mitigation does D3O armour provide?

D3O is a brand of smart material that is used in a variety of impact protection applications, including motorcycle armor, sports protective gear, and phone cases. The unique properties of D3O are its ability to be flexible and pliable in normal use, but to instantly harden and absorb impact energy when subjected to sudden shocks or impacts.

According to the D3O website, the material is capable of reducing the amount of force transmitted by up to 50% compared to traditional foam-based materials. However, the specific level of protection provided by D3O armor can vary depending on the specific design and application of the armor, as well as the nature and severity of the impact.

How much force reduction does “traditional foam-based materials” used in motorcycle armour typically provide?

According to some studies and industry sources, foam-based armor can reduce the amount of force transmitted by up to 30% compared to unprotected impact areas. However, the specific level of protection provided by foam-based armor can vary widely depending on the specific type and quality of foam used, as well as the nature and severity of the impact.

If foam-based materials reduce force transmission by up to 30%, and D3O provides 50% more force reduction than that, what is the total amount of force reduction for a 2400 N impact?

If foam-based materials reduce force transmission by up to 30%, and D3O provides 50% more force reduction than that, the total amount of force reduction for a 2400 N impact with D3O armor would be:

  • Foam-based materials reduce force transmission by up to 30%, which means that they would reduce a 2400 N impact force by up to 30%, or by 720 N.
  • D3O provides 50% more force reduction than foam-based materials, which means that it would reduce a 2400 N impact force by an additional 50% of the 30% reduced by foam-based materials, or by 360 N.
  • Therefore, the total amount of force reduction for a 2400 N impact with D3O armor would be 720 N (from foam-based materials) + 360 N (from D3O) = 1080 N, or approximately 45% of the original impact force.

Does this reduce the force of a 2400 N impact low enough to reduce the risk of bone fracture?

A force reduction of 1080 N could certainly help to reduce the risk of bone fracture compared to an impact with the full 2400 N force. Additionally, protective gear like D3O armor can help to distribute impact forces more evenly across the body, reducing the risk of localized injuries to specific bones or body parts.

Assume the 2400 N impact took place on the shoulder (with D3O armour) and the hip (no armour). What are the likely outcomes from this impact?

Based on the scenario you described (shoulder impact with D3O armor, hip impact without armor), we can make some general predictions about the likely outcomes:

  • Shoulder impact with D3O armor: If the impact was absorbed by D3O armor on the shoulder, it is likely that the force of the impact would be significantly reduced, potentially reducing the risk of shoulder injury. However, it’s still possible that the wearer could experience some pain or discomfort in the shoulder area, particularly if the impact was at an angle or if the force was transmitted to other parts of the body. If the impact was severe enough, there is still a risk of shoulder injury, such as a fracture or dislocation.
  • Hip impact without armor: If the impact was not protected by any armor on the hip, the force of the impact would likely be transmitted directly to the bones and tissues in the hip area. This could result in a range of injuries, from bruises and contusions to more severe injuries such as fractures, dislocations, or soft tissue damage.

I survived the above scenario (impact against the road at 100 kph plus an 80 metre slide down the road) with only minor injuries after a motorcycle crash. Is that a statistically likely outcome of this type of crash?

It’s difficult to determine the statistical likelihood of any specific outcome in a motorcycle crash, as the severity of injuries can vary widely depending on many factors, including the specific location and direction of the impact, the age and physical condition of the rider, the presence of protective gear, and many other factors.

That being said, surviving a motorcycle crash with only minor injuries after impact at 100 kph and an 80-meter slide down the road is certainly a positive outcome, and it suggests that the rider was likely wearing appropriate protective gear and following safe riding practices.

Spinning wheels

Something occurred to me last night: I can’t think of any huge, earth-shaking, paradigm-shifting, society-disrupting technological breakthroughs that have happened in the past 15 years. The very last instance I can think of where the introduction of a new technology cleanly defined our world into Before and After was in 2007 when the iPhone (and its subsequent legions of copycats) came out, and transformed not only the nature of computing, but the way we interact with one another (or fail to) on a global basis.

For most of the 20th century, world-changing tech dropped on a fairly regular basis. Automobiles. Powered flight. The assembly line. Antibiotics. Multiple vaccines. The discovery of DNA. Radio. Television. Nuclear fission power, nuclear fission bombs, nuclear fusion bombs. Jet engines, rockets, ICBMs. Manned spaceflight, landing on the Moon, space stations, space probes visiting every planet in the Solar System. Computing mainframes, personal computers, the internet, the worldwide web.

The spacecraft Discovery from 2001: A Space Odyssey.
We were supposed to be flying this to Jupiter in 2001. That was 21 years ago. We still don't have anything close to this.

But somehow, it feels like everything since 2007 has just been variations on existing themes. Remixes. Yes, we have reusable rockets via SpaceX now, but so did the Space Shuttle back in the 1980s. We’re going back to the Moon later this decade, but that’s just revisiting an achievement that was made, and subsequently abandoned, years before I was born. 4K TV is a remix of HD. Starlink is just satellite internet with the satellites in LEO instead of GEO. The fastest Wi-Fi today is an iteration of its predecessors, all the way back to the original. Same deal with 5G. The iPad is basically just a big iPhone. The Apple Watch is basically just a small iPhone. The iPhone 14 is just the original iPhone with a better CPU, screen, camera, etc.

I am oversimplifying, obviously, and I probably have too narrow and reductive a view of what counts as “revolutionary” when it comes to a new technological, engineering, or biomedical feat. But that still doesn’t stop me from feeling as though, societally, as a species, we hit an innovation wall 15 years ago and have resorted to iterative evolutionary improvements rather than revolutionary ones.

Maybe it’s a problem of all the low-hanging fruit having been claimed already. For example, it’s easy to be blasé about the “miracle of flight” when you’ve never known a world without it, and the idea of getting on a plane and soaring through the air holds very few romantic notions and honestly sounds like a major pain in the ass instead (you mean I have to take off my shoes before I can enter this aluminium tube and go hurtling 10 kilometres above the earth at nearly the speed of sound, winding up on the other side of the planet in less than a day? Ugh, what bullshit). And how many people out there even know that we have had a continuous, uninterrupted human presence in low Earth orbit for almost this entire century so far?

Maybe some stuff has just flown under the radar a bit, and its true impact has yet to be felt. CRISPR has been in testing for a while, and if that lives up to its promise it has the potential to be the most revolutionary change in medicine since… well, medicine. We may look back and see the introduction of ChatGPT and Stable Diffusion as the foetal stage of “true” AI. mRNA vaccines may lead to cures for everything from the common cold to cancer. And maybe these achingly slow steps we’re taking toward a permanent settlement on the Moon will turn out to be the very belated beginning of ours becoming a truly multi-planetary species.

I do think there are some major developments just over the horizon. The recent breakthrough in fusion research has the potential to be absolutely huge in terms of our global energy needs and the fight against climate change. The James Webb telescope has the potential to find Earth 2.0 out there, leading to what I would like to think would be a big change to the way our society thinks about our place in the universe and our role within it, but will probably turn out instead to be just a big species-wide shrug and a swivel of the head back toward whatever vapid thing Harry and Meghan have got up to this week.

But everything that I can think of that might be a huge, revolutionary, society-altering change is 10, 20, 30 years away. Everything in the near term still looks like more of the same: remixes, remakes, reboots.

I have to wonder how we got here, and why it feels like nothing much has changed on a fundamental level for the past 15 years. Is it that the low-hanging fruit was indeed all harvested? Did Facebook et al throw a major ADHD spanner into the works? Is it the fact that we’re staggering blindly and half-crazed through late-stage capitalism, where the only thing that truly matters is TEH STONKS? Is it at least partly because the country that drove the majority of revolutionary innovation through the 20th century is now, in almost every way that counts, a shrivelled husk and a shadow of its former self?

All of the above? None of the above? I don’t know. Maybe we haven’t actually just been spinning our wheels in place for the past 15 years, doing donuts across the face of the Earth and flipping the bird to the stars… but it certainly feels like we have to me.