Xindi Kang is a researcher and artist working with interactive media. She is interested in the interface between humans and technology through voice and movement. She designs experiences that inspire users to see and hear themselves in different ways. Her research and professional interests lie in human-computer interaction and user experience design. She is currently working toward her master's degree in the Media Arts and Technology program at the University of California, Santa Barbara.
Aurora
Data Visualization | Processing
Oscilla
Interactive Installation | Allolib + Pd
Luminaria
LED Installation | Arduino
GeoD
Data Visualization | Google Maps JS API
Painting Series
Paintings | Oil on canvas
Invisible Machine
Exhibition | Curatorial Work
Handwriting Coach
Haptics Project | Chai 3D
xindi[at]ucsb[dot]edu
Aurora
Data Visualization | Developed in Java-based Processing
Aurora is a 3D visualization of the relationship between people’s level of curiosity about the aurora borealis and the actual intensity of the aurora borealis at the North Pole. The visualization draws on book-checkout data from the Seattle Public Library and solar wind intensity data from NASA, covering 2006-2014. The raw data sets were processed with SQL and a Python Jupyter Notebook, then visualized in a spherical coordinate system so that four dimensions of information (month, year, Dewey class, and level of interest/intensity) can exist simultaneously. Users can scroll through ten years of data with a GUI element and watch the fluctuation between years animate; they can also toggle scales such as month, year, and Dewey class on and off with key presses.
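The piece itself is written in Processing; the standalone C++ snippet below is only a rough sketch of how the month, Dewey class, and interest/intensity values could be packed into one spherical position (the year dimension is what the GUI scrolls through). The axis assignments, ranges, and constants are illustrative assumptions, not Aurora's actual mapping.

#include <cmath>
#include <cstdio>

// Map one data point (month, Dewey class, interest level) to a 3D position.
// Month selects the azimuth, Dewey class the polar angle, and the interest
// (or aurora intensity) value pushes the point outward from a base radius.
// All ranges and constants below are illustrative assumptions.
struct Vec3 { double x, y, z; };

Vec3 toSpherical(int month, int deweyClass, double interest) {
    const double PI = 3.14159265358979323846;
    double azimuth = (month - 1) / 12.0 * 2.0 * PI;  // 12 months around the sphere
    double polar   = (deweyClass / 900.0) * PI;      // Dewey 000-900 from pole to pole
    double radius  = 100.0 + 50.0 * interest;        // interest in [0, 1] extends the radius
    return { radius * std::sin(polar) * std::cos(azimuth),
             radius * std::sin(polar) * std::sin(azimuth),
             radius * std::cos(polar) };
}

int main() {
    // Example: March, Dewey class 500 (natural sciences), normalized interest 0.8
    Vec3 p = toSpherical(3, 500, 0.8);
    std::printf("x=%.2f y=%.2f z=%.2f\n", p.x, p.y, p.z);
    return 0;
}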
Oscilla
Sound can be visualized in a number of ways, and different forms of representation are typically used as analytical tools in the context of scientific inquiry. Oscilla is an audio-visual installation that lets the audience interact with a waveform through a microphone, using their own voice, and experience both the acoustic and visual results. The visual feedback from the waveform and the audio feedback from the ring-modulation filter encourage the audience to produce more interesting results with their voice. With more experimentation, the audience can deduce patterns hidden in the algorithm behind the visuals and gain control over them.
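Oscilla is built with Allolib and Pure Data; the plain C++ snippet below is only a minimal sketch of the ring-modulation idea, multiplying one block of microphone samples by a sine-wave carrier. The carrier frequency, block size, and sample rate are hypothetical values.

#include <cmath>
#include <cstdio>
#include <vector>

// Ring modulation: multiply the input signal by a sine carrier, sample by sample.
// Sums and differences of the voice's partials with the carrier appear in the
// output, which gives the processed voice its characteristic metallic quality.
void ringModulate(std::vector<double>& block, double carrierHz,
                  double sampleRate, double& phase) {
    const double twoPi = 6.283185307179586;
    const double inc = twoPi * carrierHz / sampleRate;
    for (double& sample : block) {
        sample *= std::sin(phase);
        phase += inc;
        if (phase > twoPi) phase -= twoPi;
    }
}

int main() {
    // Stand-in for one block of microphone input: a 220 Hz test tone.
    const double sampleRate = 44100.0;
    std::vector<double> block(256);
    for (size_t n = 0; n < block.size(); ++n)
        block[n] = std::sin(6.283185307179586 * 220.0 * n / sampleRate);

    double phase = 0.0;
    ringModulate(block, 300.0, sampleRate, phase);  // hypothetical 300 Hz carrier
    std::printf("first processed sample: %f\n", block[0]);
    return 0;
}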
// In collaboration with Rodney Duplessis
// Currently exhibiting at the Museum of Sensory and Movement Experiences
Luminaria
Luminaria is an LED installation created as part of the IV Lightworks exhibition. LEDs were installed on a bridge in Anisq'Oyo Park, with colors and patterns responding to incoming traffic (pedestrians, bikes, and skateboards) detected by transducers attached underneath the bridge. The responses are made programmable by four Arduino units.
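The installation code ran on four Arduino units; the single-channel Arduino-style sketch below only illustrates the underlying idea of reading a transducer, thresholding it, and pulsing an LED channel in response. The pin numbers, threshold, and fade rate are hypothetical, not the values used on the bridge.

// Simplified single-channel illustration of the Luminaria response logic.
// A transducer under the bridge feeds an analog pin; when a footstep or wheel
// crosses the threshold, the LED brightness jumps and then decays back to idle.
const int SENSOR_PIN = A0;    // hypothetical transducer input
const int LED_PIN    = 9;     // hypothetical PWM output driving one LED channel
const int THRESHOLD  = 200;   // hypothetical trigger level (0-1023)

int brightness = 0;

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int reading = analogRead(SENSOR_PIN);
  if (reading > THRESHOLD) {
    brightness = 255;          // traffic detected: jump to full brightness
  } else if (brightness > 0) {
    brightness -= 5;           // otherwise fade back toward darkness
    if (brightness < 0) brightness = 0;
  }
  analogWrite(LED_PIN, brightness);
  delay(20);                   // roughly 50 updates per second
}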
// In collaboration with: David Aleman, Hsin Hsuan Chen, Chris Hoang, Intae Hwang, Xindi Kang, Lu Liu, Wen Liu, Brenda Morales, Andrew Piepenbrink, PJ Powers, Rebecca Prieto, Annika Tan, Leonardo Vargas, Muhammad Hafiz Wan Rosli, Dan Wang, Carmen Wen, Junxiang Yao
// Exhibited June 2016 - June 2017, Anisq'Oyo Park, Isla Vista, CA
GeoD
GeoD visualizes geographical information (locations and entities with geocoded information) contained in topic models. It can be used to analyze locations discussed in the whole corpus underlying a model or in a specific topic. The geocoded information that MetadataGeoD maps is gathered from the corpus for a topic model first through a “wikification” process (using the Illinois Wikifier; see L. Ratinov et al., 2011), which confirms the recognition of named entities by checking for correspondence to locations, organizations, etc., for which there are articles in Wikipedia, and second by collecting latitude/longitude information for the data. (However, not all possible named entities can be recognized and geocoded as locations in this way.)
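GeoD itself is built on the Google Maps JS API; the short C++ snippet below is only a schematic of the aggregation step the description implies, summing a topic's weight for each geocoded place so that frequently discussed locations can be drawn more prominently. The place names and weights are made up for illustration.

#include <cstdio>
#include <map>
#include <string>
#include <vector>

// One wikified, geocoded mention: a recognized place name, its coordinates,
// and the weight it carries in the topic under inspection.
struct Mention {
    std::string place;
    double lat, lng;
    double topicWeight;
};

int main() {
    // Made-up sample mentions standing in for wikifier and geocoder output.
    std::vector<Mention> mentions = {
        {"Seattle",       47.61, -122.33, 0.40},
        {"Santa Barbara", 34.42, -119.70, 0.25},
        {"Seattle",       47.61, -122.33, 0.15},
    };

    // Sum topic weight per place; the totals would drive marker sizes on the map.
    std::map<std::string, double> totals;
    for (const Mention& m : mentions) totals[m.place] += m.topicWeight;

    for (const auto& entry : totals)
        std::printf("%s: %.2f\n", entry.first.c_str(), entry.second);
    return 0;
}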
// In collaboration with Dan Baciu and Sihwa Park
// Developed for the WE1S Project; included in the Topic Model Observatory