app/_projects/a-fathers-lullaby.markdown (3 additions & 0 deletions)

@@ -33,6 +33,7 @@ team :
 _A Father's Lullaby_ is a multi-platform, community-engaged project highlighting the role of men in raising children and their absence due to racial disparities in the criminal justice system.

 {% include image file='a-fathers-lullaby.jpg'
+   alt='A wide shot of Rashin\'s installation with twelve men singing lullabies'
    caption='_A Father\'s Lullaby_ on display at the ICA in Boston' %}

 The project centers on marginalized voices of absent fathers, while inviting all men to participate by singing lullabies and sharing memories of childhood.
@@ -48,6 +49,7 @@ During the residency Rashin worked with the Emmy-award winning creative studio [
 In the installation, the father describes the experience of returning to his adult children, now in their 20s: "The main question from all of them is, ‘Why did you leave?’"

 {% include image file='volumetric-portrait.jpg'
+   alt='A man singing, represented by a human-shaped point cloud'
    caption='A still from the volumetric portrait' %}

 Supporting the in-person exhibition experiences is a website where fathers from across the USA upload recordings of their voices. Website visitors can browse and explore the voices via an interactive map, read about the project's history, and record and upload a lullaby for themselves.
@@ -60,6 +62,7 @@ As such, the project continues to grow and evolve over time as more fathers disc
 Rashin Fahandej says that A Father’s Lullaby is a "Poetic Cyber Movement for Social Justice, in which art and technology mobilizes a plethora of voices in public and virtual spaces to ignite an inclusive dialogue effecting social change."

 {% include image file='ica-panel-rashin.jpg'
+   alt='Rashin speaking on a panel with four others'
    caption='Rashin speaking on a panel at the ICA opening' %}

 The piece has been awarded the [Prix Ars Electronica Award of Distinction in Digital Music and Sound Art](https://thoughtworksarts.io/blog/rashin-fahandej-award-of-distinction-ars-electronica/), the [James and Audrey Foster prize](https://www.icaboston.org/exhibitions/2019-james-and-audrey-foster-prize), and was [exhibited at the Institute for Contemporary Arts (ICA)](https://thoughtworksarts.io/blog/institute-contemporary-arts-biennal-exhibits-rashin-fahandej-boston/) on the Boston waterfront Aug 21 – Dec 31, 2019.
app/_projects/biofields.markdown (2 additions & 0 deletions)

@@ -17,12 +17,14 @@ team :
 *Biofields* is an open-source framework combining biosensors with volumetric and interval arithmetic simulations to interactively model the feeling of energy around the body.

 {% include image file='heart-chakra.jpg'
+   alt='A digitized human sitting in a VR landscape'
    caption='Conceptual representation for *Biofields* VR experience' %}

 Aided by Lewey Geselowitz on body simulation and rendering, the project draws from traditional circulatory anatomy, eastern medicine, simulation techniques and biosensor innovations.

 {% include image file='metrics.jpg'
    class='small'
+   alt='A grid showing relational metrics'
    caption='Relational metrics used to define biofields' %}

 The idea is to support overlapping fields of thermal, photonic, heart-rate, EM frequencies, and various colors to combine and interpolate multiple single-point biosensors into spatial fields of real-time imagery in VR.
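The Biofields summary above names interval arithmetic as one of its simulation ingredients. As a purely illustrative sketch of that idea — not the Biofields codebase, whose types and APIs are not shown in this diff — a sensor reading carried as a [lo, hi] range keeps its uncertainty when readings are combined:

```python
# Hypothetical illustration of interval arithmetic for biosensor blending;
# class and variable names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of two uncertain readings: bounds add component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the result is bounded by the extreme cross-products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# e.g. a heart-rate-derived intensity of 0.4–0.6 weighted by a thermal
# factor of 0.9–1.1 stays an interval rather than collapsing to one number.
print(Interval(0.4, 0.6) * Interval(0.9, 1.1))
```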
app/_projects/birds-of-the-british-empire.markdown (3 additions & 0 deletions)

@@ -42,6 +42,7 @@ On the surface, the archive seems firmly rooted in the historical past. By contr
 Coupe’s recent work (for example, Warriors, below) uses synthetic media, particularly the algorithmically authored images known as “deepfakes,” to explore the representational dilemmas raised by processes of cultural domination such as gentrification, surrogacy, displacement, and erasure. This work builds upon his earlier explorations of surveillance. Like surveillance media, synthetic media is observational, but also inherently archival, since it originates in datasets that have been curated and organized according to set classifiers, demographics, and statistics—all of which determine value.

 {% include image file='warriors.jpg'
+   alt='An image of people on the subway with faces digitally swapped'
    caption='Still from James Coupe, Warriors (2020)' %}

 As an artist who often authors their work through code, Coupe is interested in unpacking the perceived threat posed by AI and understanding its basis in the curiously para-fictional status of the archive. _Birds of the British Empire_ examines the archival basis of AI by focusing on the hidden role of datasets and training sets – in other words, the images, videos, audio and text that machines learn from. The biases, gaps and blind spots of these datasets are reproduced and amplified when we use AI to generate synthetic media from them—and this process, in turn, demonstrates the determining power of the archive in knowledge production. Synthetic media shapes how we see and understand the world by manipulating archives as a raw material for extraction, classification and optimization. As such, synthetic media should be viewed as an extension of the logic of colonization.
@@ -51,12 +52,14 @@ The relationship between the colonial gaze and earlier representational technolo
 In 1898, W.T. Greene published a book, _Birds of the British Empire_, a taxonomy of birds native to Britain, Australia, India, and parts of North America and Africa. Many of these birds were brought to Britain, and either released into the wild, or kept in aviaries and zoos. This is part of a historical pattern of empire which was justified through the collection, colonization and classification of “others”. The birds in Greene’s book are a dataset that has provided definitions of British identity for more than a century: Browning wrote about the Song Thrush (“Oh, to be in England”). The Budgerigar, native to Aboriginal Australia, became a popular pet in working-class British homes. The Peacock, native to India, remains a common status symbol in British stately homes. These birds are an archive of empire, and cannot be adequately classified through morphology alone. They represent a way of seeing and organizing the world, and are focal points for the colonial gaze.

 {% include image file='w-t-greene.jpg'
+   alt='Cover of the book "Birds of the British Empire"'
    caption='W.T. Greene, Birds of the British Empire (1898)'
    class='no-border' %}

 Greene’s birds are real, but his classification of them as an archive of “British” birds imposes a series of fictional constructs upon them. The wealth of available literary and patriotic references to this group of birds reinforces a pattern of human conquest, control and domination – over both nature and other people. In this project, Coupe and his collaborators are building a machine learning system that, using a training set of images of Greene’s birds and other literary, archival and historical references to birds that occur in British colonial archives, generates images and videos of synthetic birds. The resulting birds, encoded with and making visible the cultural logic of empire, will be exhibited as an aviary of artificial, “exotic” birds, each with a name derived from the terms that were used to generate them. In the resulting installation, grids of video monitors, masquerading as vitrines, each contain a bird moving around and able to respond to the presence of gallery visitors: an archive that returns our gaze. Each vitrine is accompanied by a printed page from a synthetic “field guide”, with AI-authored descriptions of the birds.

 {% include image file='gan-experiments.jpg'
+   alt='A grid showing comparative images of birds'
    caption='Initial GAN experiments' %}

 Working with machine learning systems allows us to dissolve seemingly fixed cultural hegemonies into new configurations and new possibilities. This project imagines alternative classificatory and evaluative systems, and asks us to interrogate, rather than valorize, our imperial histories. It is an urgent response to the current wave of nostalgic, isolationist politics, with its nationalist mantras of taking back control, closing borders, and making things great again. Synthetic media is characterized by its grassroots, open-source origins, and it is crucial that this particular cultural technology be employed to challenge notions of linear historical and social progress, and to point at potential new paradigms for considering narratives of race, gender, class, exclusion, inequality and empire.
app/_projects/liminal.markdown (2 additions & 0 deletions)

@@ -46,11 +46,13 @@ In collaboration with director Kevin Barry and cinematographer Kira Davies, *lim
 CRR’s executive director Mark Parsons contributed to the video’s production and thematic content, and assisted on-site during the filming.

 {% include image file='catie-demonstrating-concat-software.jpg'
+   alt='Catie dancing in front of a screen with multiple time-lapsed versions of her live performance'
    caption='Catie demonstrating CONCAT software' %}

 The video also demonstrates the success of CONCAT, software developed during Catie's residency project OUTPUT. The robot's motion that appears in both CONCAT and the *liminal* video was generated both manually, by moving the robot's joints individually with a trackpad, and automatically, using the Rhino software and inverse kinematics. Catie choreographed a sequence lasting several minutes for the robot, and highlights of this sequence appear throughout *liminal*.

 {% include image file='catie-at-crr.jpg'
+   alt='Catie dancing in front of an industrial robot'
    caption='Catie working with a 15-foot industrial robot at The Consortium for Research and Robotics (CRR)' %}

 The work was recently featured in a Forbes article on [arts, humanities and the future of work](https://www.forbes.com/sites/benjaminwolff/2021/04/06/the-arts-and-humanities-deliver-untapped-value-for-the-future-of-work/) and forms an important part of a Scientific American [article about choreorobotics](https://www.scientificamerican.com/article/dancing-with-robots/) written by Catie.
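The liminal diff above mentions driving the robot both manually, by jogging joints with a trackpad, and automatically via Rhino and inverse kinematics. As a minimal, hypothetical sketch of the inverse-kinematics idea only — not the project's actual Rhino setup or the CONCAT codebase — a two-joint planar arm can be solved in closed form:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-joint arm.

    Given a target point (x, y) and link lengths l1, l2, return the joint
    angles (radians) placing the end effector on the target. Illustrative
    only; a real six-axis industrial arm needs a full IK solver.
    """
    # Law of cosines gives the elbow angle; reject targets outside the
    # reachable annulus (|l1 - l2| <= distance <= l1 + l2).
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    # Shoulder angle: aim at the target, then correct for the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: a 2 m + 1.5 m arm reaching for a point in front of it.
print(two_link_ik(2.0, 1.0, 2.0, 1.5))
```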
app/_projects/output.markdown (3 additions & 0 deletions)

@@ -39,20 +39,23 @@ _OUTPUT_ premiered at Triskelion Arts’ Collaborations in Dance Festival in Bro
 The _CONCAT_ system [visualizes a realtime 3D animation](https://github.com/thoughtworksarts/concat) of a 15-foot industrial robot arm, nicknamed _Wen_, moving alongside a live motion capture of a person. This shows the comparison and contrast of a human body next to an automated robot body, akin to a video game matching a live body to its mechanical shadow.

 {% include image file='consortium.jpg'
+   alt='Catie instructing the movement of a large industrial robot'
    caption='Catie working with a 15-foot industrial robot' %}

 The _MOSAIC_ software used a webcam to generate time-delayed video segments, stacked side by side. Together, the two pieces of software allow space for improvisation during live performance, creating a link between the human form and the robotic arm.

 To create this performance, Catie collaborated with Thoughtworks developers and the [Consortium for Research and Robotics](https://consortiumrr.com/), a research center of Pratt Institute in New York.

 {% include image file='andy-felix.jpg'
+   alt='Engineers working in a robotics lab'
    caption='Thoughtworks developers working at the Consortium for Research & Robotics' %}

 OUTPUT was [featured in the New York Times](https://www.nytimes.com/2020/11/05/arts/dance/dance-and-artificial-intelligence.html), on [PBS NewsHour](/blog/concat-tool-feature-pbs/) in a segment on the future of work, and in an [Engadget video feature and article on robotic choreography](https://www.engadget.com/2018/10/12/robot-choreography-catie-cuan/).

 The project was demonstrated at the TED Education Weekend in October 2018, exhibited at Pioneer Works Second Sunday in April 2019, and shown at the Dance USA Conference in June 2019.

 {% include image file='concat.jpg'
+   alt='A visitor trying out Catie\'s interactive exhibit'
    caption='CONCAT during a public showcase at Pioneer Works' %}

 An academic paper describing the work and its implications, titled _“OUTPUT: Translating Robot and Human Movers Across Platforms in a Sequentially Improvised Performance”_, [was published](http://aisb2019.machinemovementlab.net/MTSB2019_Cuan_Pearlman_McWilliams.pdf) in the _Movement that Shapes Behaviour_ symposium at the 2019 Artificial Intelligence and Simulation of Behaviour Convention. The piece is also written up in [Frontiers in Robotics and AI](https://www.frontiersin.org/articles/10.3389/frobt.2020.576790/full).
app/_projects/perception-io.markdown (3 additions & 0 deletions)

@@ -58,6 +58,7 @@ Perception iO uses facial expression recognition to analyze reactions as viewers
 The film runs through different scenarios of policing, portraying interactions with both black and white civilians. How the viewer responds to the presented scenarios triggers different outcomes in a branching narrative. The immersive experience explores ways in which a person’s gaze and emotional reactions influence their perception of reality.

 {% include image file='perception-cooper-hewitt.jpg'
+   alt='A woman watching the video experience in a dark room'
    caption='Installation view from the Face Values exhibition' %}

 Participants are invited to analyze the data gathered from their interaction with the film, to consider how AI technology registers their behavior, and how human behavior is affected by implicit bias.
@@ -75,11 +76,13 @@ Thoughtworks developers guided the technical development, by increasing the accu
 ## Exhibitions and Publications

 {% include image file='perception-screen.jpg'
+   alt='A video still showing a police officer holding a gun and a woman in distress'
    caption='A still shot from Perception iO' %}

 Perception iO was commissioned for the 2019-2020 exhibition [Face Values: Exploring Artificial Intelligence at The Cooper Hewitt Smithsonian Design Museum](https://www.cooperhewitt.org/events/current-exhibitions/face-values/). The project was also part of the exhibitions [In Kepler's Gardens](https://ars.electronica.art/keplersgardens/en/perception/) at Ars Electronica and [Expo Starts Prize à Bozar](https://www.rtbf.be/info/medias/detail_expo-starts-prize-a-bozar-art-et-science-s-associent-pour-penser-l-avenir?id=10593690), and Karen and the Thoughtworks team received an [honorable mention as part of STARTS Prize 2020](https://starts-prize.aec.at/en/perception/).

 Perception iO has been reviewed in [Wired](https://www.wired.co.uk/article/karen-palmer-racist-bias), [BioMetric Update](https://www.biometricupdate.com/202003/artist-demonstrates-emotion-detecting-videos-may-help-people-see-their-own-biases), [Immersive Futures](https://immersivefutures.io/perception-io/) and the [Smithsonian Magazine](https://www.smithsonianmag.com/smithsonian-institution/heres-why-ai-cant-be-taken-face-value-180973235/).

 {% include image file='perception-wired.jpg'
+   alt='The cover of the edition of WIRED featuring Karen\'s work'
    caption='Wired article on Karen Palmer creating Perception iO' %}
app/_projects/riot.markdown (4 additions & 0 deletions)

@@ -55,13 +55,15 @@ extended-team :
 *RIOT* is an immersive, emotionally responsive, live-action film that positions viewers in the middle of a riot in progress. The film responds to participants’ emotional expressions using artificial intelligence, altering the video story journey in real time.

 {% include image file='riot-1.jpg'
+   alt='An audience member standing in front of the installation in a dark room'
    caption='An audience member experiencing the *RIOT* installation' %}

 The *RIOT* film installation experience allows viewers to consider how they might react in the face of imminent danger. A webcam or camera is used to monitor each viewer’s facial characteristics as they watch the film, and the video narrative responds accordingly.

 For example, if the viewer appears agitated, the character in the film responds defensively or impatiently. The same is true for a number of other assessed emotional expressions, such as calmness or fear.

 {% include image file='developers.jpg'
+   alt='Engineers discussing the project with Karen in front of a whiteboard'
    caption='Karen working with Thoughtworks developers Sofia Tania and Angelica Perez' %}

 *RIOT* provides feedback into complex cultural issues surrounding split-second decisions in times of actual crisis. It places viewers at the center of a provocative story with a number of potential scenarios that may be triggered in real time by their authentic reactions.
@@ -72,6 +74,7 @@ During her time at the Thoughtworks Arts Residency, Karen worked with Thoughtwor
 As a result, Karen can add new emotional expressions to the experience, and can add new narrative layers to the *RIOT* prototype via a custom-built user interface.

 {% include image file='open-studios.jpg'
+   alt='People discussing the work in an office setting'
    caption='Karen at one of her regular \'Open Studios\'' %}

 The *RIOT* experience works with a deep neural net toolkit for emotional expression analysis, created by Thoughtworks, named *EmoPy*. The system has [been made open source](https://github.com/thoughtworksarts/EmoPy) in order to provide free access beyond existing closed-box commercial implementations, both widening access and encouraging debate.
@@ -81,6 +84,7 @@ As of 2019, EmoPy is [featured as a Thoughtworks project on the Open Source home
 Karen's previous work with [Dr. Hongying Meng](https://www.brunel.ac.uk/people/hongying-meng) of Brunel University, London, was refined and developed as part of the piece. This includes his research and implementation of new facial expression analysis techniques.

 {% include image file='riot-2.jpg'
+   alt='A "police officer" keeping guard as visitors try out the RIOT experience'
    caption='*RIOT* exhibited at SPRING/BREAK as part of a Thoughtworks Arts exhibition during Armory Week' %}

 During her residency, Karen hosted regular 'Open Studios' events at Thoughtworks New York. Stakeholders from across industry, academia and the arts were invited to discuss the implications of AI technology, using the residency and artwork as a focal point for critical discussion.
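Since the riot.markdown diff above references EmoPy, here is a minimal usage sketch based on the FERModel interface documented in the EmoPy repository. It is illustrative only — the module path, the chosen emotion subset, and the image filename are assumptions that may differ between EmoPy versions — and it is not the RIOT installation code.

```python
# Hedged sketch: classifying the dominant emotion in a single face image
# with EmoPy. Assumes `pip install EmoPy` and the import path shown in the
# project README; verify both against the installed version.
from EmoPy.src.fermodel import FERModel

# FERModel works with subsets of emotions; this trio is assumed to be one
# of the supported combinations.
target_emotions = ['calm', 'anger', 'happiness']

model = FERModel(target_emotions, verbose=True)

# Path to a face image supplied by the caller (hypothetical filename);
# predict() prints the model's assessment of the expression.
model.predict('viewer_frame.jpg')
```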