SIGGRAPH 2019 to Debut Research Advances From 31 Countries

Technical, Art Papers Programs to Present a Combined 157 Projects

CHICAGO–(BUSINESS WIRE)–#ACMSIGGRAPH–Known for pushing the boundaries of
computer science, SIGGRAPH 2019 announces its Technical Papers and Art
Papers research programming. SIGGRAPH 2019 will run 28 July–1 August in
downtown Los Angeles. Known throughout its 46-year history for delivering
cutting-edge, global research, this year's innovations are sure to inspire
the computer science community.


“Each year, the Technical Papers program sets the pace for what’s next
in visual computing and the adjacent subfields of computer science. I am
excited to be part of presenting the amazing work of researchers who
drive the industry and look forward to how this work ignites memorable
discussions,” said SIGGRAPH 2019 Technical Papers Chair Olga
Sorkine-Hornung. “This is the kind of content you’ll reflect on, and
refer to, throughout the year to come.”

Along with new research from various academic labs, Facebook Reality
Labs, NVIDIA, and Disney Research, highlights from the 2019 Technical
Papers program include:

Semantic Photo Manipulation With a Generative Image Prior
Authors:
David Bau, Massachusetts Institute of Technology and MIT-IBM Watson AI
Lab; Hendrik Strobelt, IBM Research and MIT-IBM Watson AI Lab; William
Peebles, Jonas Wulff, Jun-Yan Zhu, and Antonio Torralba, Massachusetts
Institute of Technology; and, Bolei Zhou, The Chinese University of Hong
Kong

We use GANs to make semantic edits on a user’s image. Our
method maintains fidelity to the original image while allowing the user
to manipulate the semantics of the image.
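
To give a rough sense of the approach described above, here is a minimal
sketch, assuming a pretrained GAN generator G, its latent dimensionality
G.latent_dim, and a user-chosen edit function edit_fn (all hypothetical
placeholders; this is not the authors' implementation). The photo is first
projected into the generator's latent space, a semantic edit is applied
there, and the change is blended back into the original pixels to preserve
fidelity.

```python
# Illustrative sketch only -- not the paper's code.
import torch

def project_and_edit(G, photo, edit_fn, steps=500, lr=0.05):
    """photo: image tensor of shape (1, 3, H, W) in the generator's range."""
    # Optimize a latent code so the GAN reproduces the user's photo.
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), photo)  # reconstruction loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        edited = edit_fn(G, z)              # hypothetical semantic edit in GAN space
        # Blend only the GAN's change back into the original photo,
        # keeping the untouched regions pixel-faithful to the input.
        return photo + (edited - G(z))
```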

MeshCNN: A Network With an Edge
Authors: Rana Hanocka,
Amir Hertz, Noa Fish, Raja Giryes, and Daniel Cohen-Or, Tel Aviv
University; and, Shachar Fleishman, Amazon

MeshCNN is a deep
neural network for triangular meshes, which applies convolution and
pooling layers directly on the mesh edges. MeshCNN learns to exploit the
irregular and unique mesh properties.
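
As a rough illustration of convolving directly on mesh edges, the sketch
below (not the released MeshCNN code) assumes per-edge feature vectors and a
precomputed index of each edge's four adjacent edges, i.e., the two remaining
edges of each of its two incident triangles. The actual method additionally
makes the neighbor aggregation order-invariant and pools by collapsing edges.

```python
# Illustrative sketch only -- edge features: (num_edges, channels);
# neighbors: (num_edges, 4) indices of the four adjacent edges.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # One weight block per "slot": the edge itself plus its four neighbors.
        self.lin = nn.Linear(5 * in_ch, out_ch)

    def forward(self, edge_feats, neighbors):
        gathered = edge_feats[neighbors]                                   # (E, 4, C)
        stacked = torch.cat([edge_feats.unsqueeze(1), gathered], dim=1)   # (E, 5, C)
        return torch.relu(self.lin(stacked.flatten(1)))                   # (E, out_ch)
```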

Text-Based Editing of Talking-Head Video
Authors: Ohad
Fried, Michael Zollhöfer, and Maneesh Agrawala, Stanford University;
Ayush Tewari and Christian Theobalt, Max Planck Institute for
Informatics; Adam Finkelstein and Kyle Genova, Princeton University; Eli
Shechtman and Zeyu Jin, Adobe; and, Dan B. Goldman, Google

Text-based
editing of talking-head video supports adding, removing, and modifying
words in the transcript, and automatically produces video with lip
synchronization that matches the modified script.

SurfaceBrush: From Virtual Reality Drawings to Manifold Surfaces
Authors:
Enrique Rosales, University of British Columbia and Universidad
Panamericana; Jafet Rodriguez, Universidad Panamericana; and, Alla
Sheffer, University of British Columbia

VR tools enable users
to depict 3D shapes using virtual brush strokes. SurfaceBrush converts
such VR drawings into user-intended manifold 3D surfaces, providing a
novel approach for modeling 3D shapes.

Puppet Master: Robotic Animation of Marionettes
Authors:
Simon Zimmermann, James Bern, and Stelian Coros, ETH Zurich; and, Roi
Poranne, ETH Zurich and University of Haifa

We present a
computational framework for robotic animation of real-world string
puppets, based on a predictive control model that accounts for the
puppet dynamics and the kinematics of the robot puppeteer.

For even more highlights, check out the Technical Papers Preview on
YouTube: https://youtu.be/EhDr3Rs5fTU.

In addition, the Art Papers program offers a platform to explore and
interrogate research that focuses, specifically, on scientific and
technological applications in art, design, and humanities. Highlights
for 2019 include:

CAVE: Making Collective Virtual Narrative
Authors: Kris
Layng, Ken Perlin, and Sebastian Herscher, New York University / Courant
and Parallux; Corrine Brenner, New York University; and, Thomas Meduri,
New York University / Courant and VRNOVO

CAVE is a shared
narrative virtual reality experience. Thirty participants at a time each
saw and heard the same narrative from their own unique location in the
room, as they would when attending live theater. CAVE set out to
disruptively change how audiences collectively experience immersive art
and entertainment.

Learning to See: You Are What You See
Authors: Memo
Akten and Rebecca Fiebrink, Goldsmiths, University of London; and, Mick
Grierson, University of the Arts, London

“Learning to See” utilizes a novel method for “performing” visual, animated
content, with an almost photographic visual style, using deep learning. It
demonstrates both the collaborative potential of AI and the inherent biases
reflected and amplified in artificial neural networks, and perhaps even in
our own neural networks.

To discover more highlights, check out the Art Papers Preview on
YouTube: https://youtu.be/6uhyhW58A2M.

Technical Papers programming is open to participants at the Full
Conference Platinum and Full Conference registration levels only. Art
Papers programming is open to the Experiences level and above. Learn
more about SIGGRAPH 2019 and register here: s2019.SIGGRAPH.org/register.

About ACM, ACM SIGGRAPH and SIGGRAPH 2019

ACM, the Association for Computing Machinery, is the world’s largest
educational and scientific computing society, uniting educators,
researchers, and professionals to inspire dialogue, share resources, and
address the field’s challenges. ACM SIGGRAPH is a special interest group
within ACM that serves as an interdisciplinary community where researchers,
artists, and technologists converge to advance applications in computer
graphics and interactive techniques. The SIGGRAPH conference is the world’s
leading annual interdisciplinary educational experience for inspiring
transformative advancements across the disciplines of computer graphics and
interactive techniques. SIGGRAPH 2019, the 46th annual conference hosted by
ACM SIGGRAPH, will take place from 28 July–1 August at the Los Angeles
Convention Center.

Contacts

Media Contact:
Emily Drake
Media Relations Manager
+1.312.673.4758
emily_drake@SIGGRAPH.org
