Positioned along the concave western façade of the new Robotics building, the robot garden serves as a testing ground for mobile robots of various families. The design of the garden is based entirely on Generative Adversarial Networks (GANs), employing Artificial Intelligence at every step of the design process. This allows the generation of a synthetic ecology situated somewhere between the natural and the artificial. The resulting digital model serves, on the one hand, as the template for construction; on the other, it is used to simulate and prepare the testing of robots in the garden. The garden itself is executed in natural materials, providing a variety of terrains such as grass, gravel, stone, sand, and water, along with topographical features such as waves, inclinations, and pits to emulate difficult terrain. Though the garden is made of natural materials, there is a contrast between the natural landscape features adjacent to the Robot Garden and the Robot Garden itself, which flirts with an intentional artificiality – operating within the realm of a synthetic ecology, befitting the origin of robots as an artificial progeny.
Artificial neural networks have become ubiquitous across disciplines due to their high performance in modeling the real world and executing complex tasks in the wild. The project Robot Garden employs a computational design approach that uses the learned, internal representations of deep vision neural networks to invoke stylistic edits in both 2D objects (images) and 3D objects (meshes). The proposed technique un-shelves ideas of style and interrogates this architectural position in light of 21st-century toolsets and production environments. Neural networks, a branch of research on Artificial Intelligence, serve as the main driving force in the exploration of novel design trajectories that evoke questions of agency, style, and perception in a posthuman architectural design ecology.
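To make the idea of "stylistic edits driven by learned internal representations" concrete, the following is a minimal sketch of a Gram-matrix style loss, a standard way such edits are formulated in neural style transfer. The text does not specify which networks or layers the project used, so the feature maps below are toy numpy arrays standing in for real network activations; all names are illustrative.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map: pairwise channel correlations.

    features: array of shape (channels, height, width), standing in for
    the activations of one layer of a pretrained vision network.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Normalize by the number of entries so the loss scale is layer-independent.
    return f @ f.T / (c * h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between the Gram matrices of two feature maps.

    Minimizing this with respect to one input's image pushes that image
    toward the "style" (texture statistics) of the other.
    """
    ga, gb = gram_matrix(features_a), gram_matrix(features_b)
    return float(np.mean((ga - gb) ** 2))

# Toy feature maps standing in for activations of two images.
rng = np.random.default_rng(0)
fa = rng.standard_normal((4, 8, 8))
fb = rng.standard_normal((4, 8, 8))
print(style_loss(fa, fa))  # identical maps -> 0.0
print(style_loss(fa, fb))  # differing maps -> positive loss
```

In a full pipeline this loss would be backpropagated through the network to the pixels of an image, or, for the 3D case, through a differentiable mesh renderer to the mesh parameters.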
The project Robot Garden makes extensive use of the described technique. The provided site was analyzed using a set of satellite images as a basis. The given shape of the site was cut out of the satellite images to create a set of pictures serving as input for 2D-to-3D neural mesh rendering. In an attempt to have a neural network dream, or hallucinate, architectural features onto the site, it was trained on an extensive library of images of features such as columns, stairs, and fountains. Surprisingly, the resulting images represent a novel view of these archaic architectural features. Owing to their hybrid nature, the resulting meshes do not show the features in full clarity; they are rather the hallucinogenic dream of a machine trying to see these features in the landscape.
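The "dreaming" described above is commonly implemented as activation maximization: gradient ascent on the input image to amplify whatever a learned feature detector responds to, as in DeepDream. The sketch below illustrates only that mechanism with a toy linear detector in numpy; the project's actual networks, training data, and filters are not specified in the text, and every name here (the detector, the site crop) is a hypothetical stand-in.

```python
import numpy as np

def dream_step(image, detector, lr=0.1):
    """One gradient-ascent step on the image.

    For a linear detector, activation = sum(detector * image), whose
    gradient with respect to the image is simply `detector`, so the
    update nudges the image along the detector's preferred pattern.
    """
    return image + lr * detector

def hallucinate(image, detector, steps=50, lr=0.1):
    """Iteratively amplify the feature the detector responds to."""
    for _ in range(steps):
        image = dream_step(image, detector, lr)
    return image

rng = np.random.default_rng(1)
site = rng.standard_normal((16, 16))              # stand-in for a satellite crop
column_detector = rng.standard_normal((16, 16))   # stand-in for a learned filter

dreamed = hallucinate(site, column_detector)
before = float(np.sum(column_detector * site))
after = float(np.sum(column_detector * dreamed))
print(before, after)  # the "dream" increases the detector's activation
```

With a real convolutional network the gradient would come from backpropagation rather than a closed form, but the loop is the same: the input is repeatedly edited so that the network "sees" its trained features ever more strongly in the landscape.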