Design Level 01: Form & Function
The look and feel of a design, its physical qualities and characteristics, and the impact of its materials and production on the environment.
Questions:
For example, how might you address sex-specific biomechanics, anthropometrics, physiology, and physical strength? How might you address a full range of skin tones in facial recognition? Are you considering how strength and dexterity differ between young people and elderly people?
Case Studies:
Key Intersecting Factors: Age, Ethnicity, Gender, Sex
Female drivers are 47% more likely than male drivers to sustain severe injuries in comparable crashes, even when controlling for body mass. Crash-test dummies have been designed to represent the mid-sized male (5′7″, 166 lbs); the 5th-percentile female dummy (4′9″, 104 lbs) is simply a scaled-down version of that male norm. These small dummies are often placed in the passenger seat, which reinforces gender stereotypes and fails to protect women drivers.
Older people are more likely to experience serious injuries in almost every crash type, and older women are at greatest risk for bone fracture.
Asian populations are, on average, shorter and lighter than U.S. adults. Professor Matt Reed at the University of Michigan has developed virtual dummies that adjust by height, weight, and age for both U.S. and Japanese populations.
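Parametric human models of this kind are typically built by regressing individual body dimensions on a few predictors such as stature, body mass index, and age. A minimal sketch of the idea follows; the coefficient values and the `HIP_BREADTH` name are invented for illustration and are not taken from Professor Reed's actual models:

```python
def predict_dimension(stature_mm: float, bmi: float, age_yr: float,
                      coeffs: tuple) -> float:
    """Linear anthropometric regression: dim = b0 + b1*stature + b2*BMI + b3*age.

    The coefficients are illustrative placeholders, not values from
    any published model.
    """
    b0, b1, b2, b3 = coeffs
    return b0 + b1 * stature_mm + b2 * bmi + b3 * age_yr

# Hypothetical coefficients for a single dimension (e.g., hip breadth, mm).
HIP_BREADTH = (50.0, 0.15, 4.0, 0.2)

# A shorter, older occupant yields a different predicted geometry
# than the mid-sized male default, so the virtual dummy can adjust.
small_older_female = predict_dimension(1500, 22.0, 70, HIP_BREADTH)
midsize_male = predict_dimension(1750, 24.5, 35, HIP_BREADTH)
print(small_older_female, midsize_male)
```

The point of the sketch is that once dimensions are functions of stature, BMI, and age, a single model family can represent many populations rather than one fixed male norm.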
Key Intersecting Factors: Gender, Sex
Can we move beyond binary notions of gender? Automatic Gender Recognition (AGR) cannot always “see” transgender people, especially during transition periods. Gender-affirming hormone therapy (HT) can redistribute facial fat and change the overall shape of the face. Focusing on the eye (or periocular) region may produce better results, since this region is less affected by these changes than other facial regions. Trans communities, however, may not wish to be subjected to facial recognition technologies at all.
Google’s Cloud Vision API no longer labels images “man” or “woman,” labels that reinforce binary gender. It now uses the neutral label “person.”
Key Intersecting Factors: Race, Sex
Soap dispensers often don’t work for people with darker skin. Why is that? The near-infrared light does not reflect off the hand strongly enough to close the circuit and dispense the soap. More seriously, optical heart-rate monitors, including Fitbits, can fail for darker-skinned people, which may put them at risk from serious conditions like heart disease. And pulse oximeters, used to measure blood oxygen, can overestimate oxygen levels in patients with darker skin, putting them at risk of organ failure if supplemental oxygen is not provided.
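The dispenser failure mode described above can be sketched as a simple reflectance threshold: the sensor fires only when enough of the emitted infrared light returns to the detector. The threshold and reflectance numbers below are invented for illustration, not measured values:

```python
def sensor_triggers(emitted_ir: float, skin_reflectance: float,
                    threshold: float = 0.35) -> bool:
    """Return True if the reflected IR signal crosses the detection threshold.

    skin_reflectance is the fraction of near-infrared light the hand
    reflects back; the threshold is an invented calibration value.
    """
    return emitted_ir * skin_reflectance >= threshold

# Illustrative reflectance values only; real values vary widely.
lighter_skin = 0.60
darker_skin = 0.30

print(sensor_triggers(1.0, lighter_skin))  # dispenser works
print(sensor_triggers(1.0, darker_skin))   # dispenser fails
```

The fix is a design decision, not a law of physics: calibrating the threshold (or sensor gain) against the lowest reflectance in the intended user population makes the device work for everyone.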
Oximeters can also be inaccurate for women, whose fingers are typically smaller and geometrically different from men’s. Black women may experience the highest error rates.
Key Intersecting Factors: Ethnicity, Geographic Location
Computer vision often mislabels images of people from different geographic locations and ethnicities. A photograph of a traditional U.S. bride dressed in white is correctly labeled “bride,” “dress,” “woman,” “wedding,” but a photograph of a North Indian bride is incorrectly labeled “performance art,” “red,” “costume.”
Why? In this case, the training data is biased. More than 45% of ImageNet data, which fuels computer vision systems, comes from the U.S., home to only 4% of the world’s population. By contrast, China and India together contribute just 3% of ImageNet data, even though these countries represent 36% of the world’s population. Datasets need to capture appropriate ethnic and geographic diversity.
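The imbalance in the figures above can be made concrete by comparing each region’s share of the dataset with its share of world population, where a ratio of 1.0 would mean proportional representation:

```python
def representation_ratio(data_share: float, population_share: float) -> float:
    """Ratio of a region's dataset share to its population share.

    1.0 means proportional representation; above 1.0 is
    overrepresented, below 1.0 is underrepresented.
    """
    return data_share / population_share

# Shares taken from the ImageNet figures quoted in the text.
us = representation_ratio(0.45, 0.04)           # U.S.: >11x overrepresented
china_india = representation_ratio(0.03, 0.36)  # China + India: ~0.08x

print(f"U.S.: {us:.2f}x")
print(f"China + India: {china_india:.2f}x")
```

Put this way, the U.S. is represented more than eleven times its population share, while China and India combined appear at less than a tenth of theirs, a gap of roughly two orders of magnitude.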