
Commit 5c75b75
Adds alt tags to project writeups

1 parent b8cb2c4 · commit 5c75b75

12 files changed: +35 -2 lines changed

app/_projects/a-fathers-lullaby.markdown

Lines changed: 3 additions & 0 deletions
@@ -33,6 +33,7 @@ team :
 _A Father's Lullaby_ is a multi-platform, community engaged project highlighting the role of men in raising children and their absence due to racial disparities in the criminal justice system.

 {% include image file='a-fathers-lullaby.jpg'
+   alt='A wide shot of Rashin\'s installation with twelve men singing lullabies'
    caption='_A Father\'s Lullaby_ on display at the ICA in Boston' %}

 The project centers on marginalized voices of absent fathers, while inviting all men to participate by singing lullabies and sharing memories of childhood.

@@ -48,6 +49,7 @@ During the residency Rashin worked with the Emmy-award winning creative studio [
 In the installation, the father describes the experience of returning to his adult children, now in their 20s: "The main question from all of them is, ‘Why did you leave?’"

 {% include image file='volumetric-portrait.jpg'
+   alt='A man singing represented by a human shaped point cloud'
    caption='A still from the volumetric portrait' %}

 Supporting the in-person exhibition experiences is a website where fathers from across the USA upload recordings of their voices. Website visitors can browse and explore the voices via an interactive map, read about the project's history, and record and upload a lullaby for themselves.

@@ -60,6 +62,7 @@ As such, the project continues to grow and evolve over time as more fathers disc
 Rashin Fahandej says that A Father’s Lullaby is a "Poetic Cyber Movement for Social Justice, in which art and technology mobilizes a plethora of voices in public and virtual spaces to ignite an inclusive dialogue effecting social change."

 {% include image file='ica-panel-rashin.jpg'
+   alt='Rashin speaking in a panel with four others'
    caption='Rashin speaking on a panel at the ICA opening' %}

 The piece has been awarded the [Prix Ars Electronica Award of Distinction in Digital Music and Sound Art](https://thoughtworksarts.io/blog/rashin-fahandej-award-of-distinction-ars-electronica/), the [James and Audrey Foster prize](https://www.icaboston.org/exhibitions/2019-james-and-audrey-foster-prize), and was [exhibited at the Institute for Contemporary Arts (ICA)](https://thoughtworksarts.io/blog/institute-contemporary-arts-biennal-exhibits-rashin-fahandej-boston/) on the Boston waterfront Aug 21 – Dec 31, 2019.
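Note that this commit only adds alt parameters at the call sites; the image include itself is not part of the diff. As a rough sketch of how such a partial could consume the new parameter (the _includes/image.html file name, the image path prefix, and the caption fallback below are assumptions, not code from this repository):

{% comment %}
  Hypothetical _includes/image.html, sketched for illustration only;
  the repository's actual partial is not shown in this commit.
{% endcomment %}
<figure class="image {{ include.class }}">
  <img src="{{ '/images/' | append: include.file | relative_url }}"
       alt="{{ include.alt | default: include.caption | markdownify | strip_html | strip }}">
  {% if include.caption %}
    <figcaption>{{ include.caption | markdownify }}</figcaption>
  {% endif %}
</figure>

A caption fallback such as default: include.caption would keep writeups that have not yet been updated from emitting an empty alt attribute, while the call sites touched by this commit override it with a purpose-written description.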

app/_projects/biofields.markdown

Lines changed: 2 additions & 0 deletions
@@ -17,12 +17,14 @@ team :
 *Biofields* is an open-source framework combining biosensors with volumetric and interval arithmetic simulations to interactively model the feeling of energy around the body.

 {% include image file='heart-chakra.jpg'
+   alt='A digitized human sitting in a VR landscape'
    caption='Conceptual representation for *Biofields* VR experience' %}

 Aided by Lewey Geselowitz on body simulation and rendering, the project draws from traditional circulatory anatomy, eastern medicine, simulation techniques and biosensor innovations.

 {% include image file='metrics.jpg'
    class='small'
+   alt='A grid showing relational metrics'
    caption='Relational metrics used to define biofields' %}

 The idea is to support overlapping fields of thermal, photonic, heart-rate and EM-frequency data, rendered in various colors, combining and interpolating multiple single-point biosensors into spatial fields of real-time imagery in VR.

app/_projects/birds-of-the-british-empire.markdown

Lines changed: 3 additions & 0 deletions
@@ -42,6 +42,7 @@ On the surface, the archive seems firmly rooted in the historical past. By contr
 Coupe’s recent work (for example, Warriors, below) uses synthetic media, particularly the algorithmically authored images known as “deepfakes,” to explore the representational dilemmas raised by processes of cultural domination such as gentrification, surrogacy, displacement, and erasure. This work builds upon his earlier explorations of surveillance. Like surveillance media, synthetic media is observational, but also inherently archival, since it originates in datasets that have been curated and organized according to set classifiers, demographics, and statistics—all of which determine value.

 {% include image file='warriors.jpg'
+   alt='An image of people on the subway with faces digitally swapped'
    caption='Still from James Coupe, Warriors (2020)' %}

 As an artist who often authors their work through code, Coupe is interested in unpacking the perceived threat posed by AI and understanding its basis in the curiously para-fictional status of the archive. _Birds of the British Empire_ examines the archival basis of AI by focusing on the hidden role of datasets and training sets – in other words, the images, videos, audio and text that machines learn from. The biases, gaps and blind spots of these datasets are reproduced and amplified when we use AI to generate synthetic media from them—and this process, in turn, demonstrates the determining power of the archive in knowledge production. Synthetic media shapes how we see and understand the world by manipulating archives as a raw material for extraction, classification and optimization. As such, synthetic media should be viewed as an extension of the logic of colonization.

@@ -51,12 +52,14 @@ The relationship between the colonial gaze and earlier representational technolo
 In 1898, W.T. Greene published a book, _Birds of the British Empire_, a taxonomy of birds native to Britain, Australia, India, and parts of North America and Africa. Many of these birds were brought to Britain, and either released into the wild, or kept in aviaries and zoos. This is part of a historical pattern of empire which was justified through the collection, colonization and classification of “others”. The birds in Greene’s book are a dataset that has provided definitions of British identity for more than a century: Browning wrote about the Song Thrush (“Oh, to be in England”). The Budgerigar, native to Aboriginal Australia, became a popular pet in working-class British homes. The Peacock, native to India, remains a common status symbol in British stately homes. These birds are an archive of empire, and cannot be adequately classified through morphology alone. They represent a way of seeing and organizing the world, and are focal points for the colonial gaze.

 {% include image file='w-t-greene.jpg'
+   alt='Cover of the book "Birds of the British Empire"'
    caption='W.T. Greene, Birds of the British Empire (1898)'
    class='no-border' %}

 Greene’s birds are real, but his classification of them as an archive of “British” birds imposes a series of fictional constructs upon them. The wealth of available literary and patriotic references to this group of birds reinforces a pattern of human conquest, control and domination – over both nature and other people. In this project, Coupe and his collaborators are building a machine learning system that, using a training set of images of Greene’s birds, and other literary, archival and historical references to birds that occur in British colonial archives, generates images and videos of synthetic birds. The resulting birds, encoded with and making visible the cultural logic of empire, will be exhibited as an aviary of artificial, “exotic” birds, each with a name derived from the terms that were used to generate them. In the resulting installation, grids of video monitors, masquerading as vitrines, each contain a bird moving around and able to respond to the presence of gallery visitors: an archive that returns our gaze. Each vitrine is accompanied by a printed page from a synthetic “field guide”, with AI-authored descriptions of the birds.

 {% include image file='gan-experiments.jpg'
+   alt='A grid showing comparative images of birds'
    caption='Initial GAN experiments' %}

 Working with machine learning systems allows us to dissolve seemingly fixed cultural hegemonies into new configurations and new possibilities. This project imagines alternative classificatory and evaluative systems, and asks us to interrogate, rather than valorize, our imperial histories. It is an urgent response to the current wave of nostalgic, isolationist politics, with its nationalist mantras of taking back control, closing borders, and making things great again. Synthetic media is characterized by its grassroots, open-source origins, and it is crucial that this particular cultural technology be employed to challenge notions of linear historical and social progress, and to point at potential new paradigms for considering narratives of race, gender, class, exclusion, inequality and empire.

app/_projects/liminal.markdown

Lines changed: 2 additions & 0 deletions
@@ -46,11 +46,13 @@ In collaboration with director Kevin Barry and cinematographer Kira Davies, *lim
 CRR’s executive director Mark Parsons contributed to the video’s production and thematic content, and assisted on-site during the filming.

 {% include image file='catie-demonstrating-concat-software.jpg'
+   alt='Catie dancing in front of a screen with multiple time-lapsed versions of her live performance'
    caption='Catie demonstrating CONCAT software' %}

 The video also demonstrates the success of CONCAT, software developed during Catie's residency project OUTPUT. The robot's motion that appears in both CONCAT and the *liminal* video was generated both manually, by moving the robot's joints individually with a trackpad, and automatically, using the Rhino software and inverse kinematics. Catie choreographed a sequence lasting several minutes for the robot, and highlights of this sequence appear throughout *liminal*.

 {% include image file='catie-at-crr.jpg'
+   alt='Catie dancing in front of an industrial robot'
    caption='Catie working with a 15 foot industrial robot at The Consortium for Research and Robotics (CRR)' %}

 The work was recently featured in a Forbes article on [arts, humanities and the future of work](https://www.forbes.com/sites/benjaminwolff/2021/04/06/the-arts-and-humanities-deliver-untapped-value-for-the-future-of-work/) and forms an important part of a Scientific American [article about choreorobotics](https://www.scientificamerican.com/article/dancing-with-robots/) written by Catie.

app/_projects/output.markdown

Lines changed: 3 additions & 0 deletions
@@ -39,20 +39,23 @@ _OUTPUT_ premiered at Triskelion Arts’ Collaborations in Dance Festival in Bro
 The _CONCAT_ system [visualizes a realtime 3D animation](https://github.com/thoughtworksarts/concat) of a 15 foot industrial robot arm, nicknamed _Wen_, moving alongside a live motion capture of a person. This shows the comparison and contrast of a human body next to an automated robot body, akin to a video game matching a live body to its mechanical shadow.

 {% include image file='consortium.jpg'
+   alt='Catie instructing the movement of a large industrial robot'
    caption='Catie working with a 15 foot industrial robot' %}

 The _MOSAIC_ software used a webcam to generate time-delayed video segments, stacked side by side. The two systems allow space for improvisation during live performance, creating a link between the human form and the robotic arm.

 To create this performance, Catie collaborated with Thoughtworks developers and the [Consortium for Research and Robotics](https://consortiumrr.com/), a research center of Pratt Institute in New York.

 {% include image file='andy-felix.jpg'
+   alt='Engineers working in a robotics lab'
    caption='Thoughtworks developers working at the Consortium for Research & Robotics' %}

 OUTPUT was [featured in the New York Times](https://www.nytimes.com/2020/11/05/arts/dance/dance-and-artificial-intelligence.html), on [PBS NewsHour](/blog/concat-tool-feature-pbs/) in a segment on the future of work, and in an [Engadget video feature and article on robotic choreography](https://www.engadget.com/2018/10/12/robot-choreography-catie-cuan/).

 The project was demonstrated at the TED Education Weekend in October 2018, exhibited at Pioneer Works Second Sunday in April 2019, and shown at the Dance USA Conference in June 2019.

 {% include image file='concat.jpg'
+   alt='A visitor trying out Catie\'s interactive exhibit'
    caption='CONCAT during a public showcase at Pioneer Works' %}

 An academic paper describing the work and its implications [was published](http://aisb2019.machinemovementlab.net/MTSB2019_Cuan_Pearlman_McWilliams.pdf) in the _Movement that Shapes Behaviour_ symposium at the 2019 Artificial Intelligence and Simulation of Behaviour Convention, titled _“OUTPUT: Translating Robot and Human Movers Across Platforms in a Sequentially Improvised Performance”_. The piece is written up in [Frontiers in Robotics and AI](https://www.frontiersin.org/articles/10.3389/frobt.2020.576790/full).

app/_projects/perception-io.markdown

Lines changed: 3 additions & 0 deletions
@@ -58,6 +58,7 @@ Perception iO uses facial expression recognition to analyze reactions as viewers
 The film runs through different scenarios of policing, portraying interactions with both black and white civilians. How the viewer responds to the presented scenarios triggers different outcomes in a branching narrative. The immersive experience explores ways in which a person’s gaze and emotional reactions influence their perception of reality.

 {% include image file='perception-cooper-hewitt.jpg'
+   alt='A woman watching the video experience in a dark room'
    caption='Installation view from the Face Values exhibition' %}

 Participants are invited to analyze the data calibrated from their interaction with the film, to consider how AI technology registers their behavior, and how human behavior is affected by implicit bias.

@@ -75,11 +76,13 @@ Thoughtworks developers guided the technical development, by increasing the accu
 ## Exhibitions and Publications

 {% include image file='perception-screen.jpg'
+   alt='A video still showing a police officer holding a gun and a woman in distress'
    caption='A still shot from Perception iO' %}

 Perception iO was commissioned for the 2019-2020 exhibition [Face Values: Exploring Artificial Intelligence at The Cooper Hewitt Smithsonian Design Museum](https://www.cooperhewitt.org/events/current-exhibitions/face-values/). The project was also part of the exhibitions [In Kepler's Gardens](https://ars.electronica.art/keplersgardens/en/perception/) at Ars Electronica and [Expo Starts Prize à Bozar](https://www.rtbf.be/info/medias/detail_expo-starts-prize-a-bozar-art-et-science-s-associent-pour-penser-l-avenir?id=10593690), and Karen and the Thoughtworks team received an [honorable mention as part of STARTS Prize 2020](https://starts-prize.aec.at/en/perception/).

 Perception iO has been reviewed in [Wired](https://www.wired.co.uk/article/karen-palmer-racist-bias), [BioMetric Update](https://www.biometricupdate.com/202003/artist-demonstrates-emotion-detecting-videos-may-help-people-see-their-own-biases), [Immersive Futures](https://immersivefutures.io/perception-io/) and the [Smithsonian Magazine](https://www.smithsonianmag.com/smithsonian-institution/heres-why-ai-cant-be-taken-face-value-180973235/).

 {% include image file='perception-wired.jpg'
+   alt='The cover of the edition of WIRED featuring Karen\'s work'
    caption='Wired article on Karen Palmer creating Perception iO' %}

app/_projects/riot.markdown

Lines changed: 4 additions & 0 deletions
@@ -55,13 +55,15 @@ extended-team :
 *RIOT* is an immersive, emotionally responsive, live-action film that positions viewers in the middle of a riot in progress. The film responds to participants’ emotional expressions using artificial intelligence, altering the video story journey in real time.

 {% include image file='riot-1.jpg'
+   alt='An audience member standing in front of the installation in a dark room'
    caption='An audience member experiencing the *RIOT* installation' %}

 The *RIOT* film installation experience allows viewers to consider how they might react in the face of imminent danger. A webcam or camera is used to monitor each viewer’s facial characteristics as they watch the film, and the video narrative responds accordingly.

 For example, if the viewer appears agitated, the character in the film responds defensively or impatiently. The same is true for a number of other assessed emotional expressions, such as calmness or fear.

 {% include image file='developers.jpg'
+   alt='Engineers discussing the project with Karen in front of a whiteboard'
    caption='Karen working with Thoughtworks developers Sofia Tania and Angelica Perez' %}

 *RIOT* offers insight into complex cultural issues surrounding split-second decisions in times of actual crisis. It places viewers at the center of a provocative story with a number of potential real-time scenarios that may be triggered by their authentic reactions.

@@ -72,6 +74,7 @@ During her time at the Thoughtworks Arts Residency, Karen worked with Thoughtwor
 As a result, Karen can add new emotional expressions to the experience, and can add new narrative layers to the *RIOT* prototype via a custom-built user interface.

 {% include image file='open-studios.jpg'
+   alt='People discussing the work in an office setting'
    caption='Karen at one of her regular \'Open Studios\'' %}

 The *RIOT* experience works with a deep neural net toolkit for emotional expression analysis, created by Thoughtworks, named *EmoPy*. The system has [been made open source](https://github.com/thoughtworksarts/EmoPy) in order to provide free access beyond existing closed-box commercial implementations, both widening access and encouraging debate.

@@ -81,6 +84,7 @@ As of 2019, EmoPy is [featured as a Thoughtworks project on the Open Source home
 Karen's previous work with [Dr. Hongying Meng](https://www.brunel.ac.uk/people/hongying-meng) of Brunel University, London, was refined and developed as part of the piece. This includes his research and implementation of new facial expression analysis techniques.

 {% include image file='riot-2.jpg'
+   alt='A "police officer" keeping guard as visitors try out the RIOT experience'
    caption='*RIOT* exhibited at SPRING/BREAK as part of a Thoughtworks Arts exhibition during Armory Week' %}

 During her residency, Karen hosted regular 'Open Studios' events at Thoughtworks New York. Stakeholders from across industry, academia and the arts were invited to discuss the implications of AI technology, using the residency and artwork as a focal point for critical discussion.
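For context on the EmoPy toolkit referenced in the writeup above, the snippet below is a minimal Python sketch of classifying a facial expression from a still image, based on the FERModel interface documented in the EmoPy repository; the image path and the chosen emotion subset are illustrative placeholders, not part of this commit.

# Minimal EmoPy sketch (assumes the package is installed, e.g. `pip install EmoPy`).
from EmoPy.src.fermodel import FERModel

# FERModel ships pre-trained weights for specific emotion subsets;
# ['calm', 'anger', 'happiness'] is one of the documented combinations.
model = FERModel(['calm', 'anger', 'happiness'], verbose=True)

# predict() loads the image, runs the network and prints the most likely
# expression; 'face.jpg' is a placeholder path to a cropped face image.
model.predict('face.jpg')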
