I wanted to explore the idea of taking these maquettes further: potentially changing their scale, producing them in a different material, recreating them as 3D objects, and thinking about how they could become something new. I was keen to try scanning the items using 3D software; this would give me the objects as digital versions of themselves, and I could play with scale and output. I used ‘KiriEngine’ (https://www.kiriengine.app/), an app you simply use on your phone, taking multiple images around your subject and then rendering in the format you wish. In doing this I wanted to test how I could speak to the app and how it translated my intentions; I wanted to push the control of the language between human and machine.
The process can be very frustrating. Much is lost in translation; it does, however, offer potential ‘happy accidents’ that could be taken further. Below are some of the outcomes and downloaded files.
The 3D digital scan. The software scans the item and then maps photographs of the actual object onto the shape. On your phone you can rotate this through a full 360 degrees, and it is a good representation of the object, when it works.
This is one of the jpg files it produces. Fascinating and reminiscent of the object in a collage-like way, yet very soupy and liquid. The other files it produces are for 3D printing and for taking into other 3D software.
However, it doesn’t always work, and I didn’t know why until I researched further and asked for advice. The three images below are examples where it hasn’t recorded the presence of the model at all, just elements of the background...
Equally, the jpg files it outputs are even more abstract. Could they be something in their own right? Should I take them forward and either print them as they are, or use them as inspiration for a painting? They are examples of the language breaking down and understanding being lost, even if the software thinks it has created something I intended. I equate it to ordering a meal in a language I have no understanding of and being served something I cannot identify. Do I eat it anyway?
The reason the software was not picking up the object is that I had made a completely white background for the object and used a turntable, keeping the camera in one place while rotating the object. But you need to photograph the object on a messy, scratched surface: the software isn’t necessarily looking at the object itself, it is registering the marks in the background and using these as reference points to calculate the object’s position in the space. It is measuring the negative space. Does this mean that the negative space is equally as important as the object, or more so?
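The idea of reference points can be sketched in code. Photogrammetry software looks for high-contrast features in each photograph and matches them between frames to triangulate positions; a perfectly white backdrop offers nothing to match. The sketch below is a deliberately simplified, illustrative model of feature detection, not KiriEngine’s actual algorithm:

```python
# Illustrative sketch only: treats a pixel as a candidate "reference point"
# if it contrasts strongly with its neighbours. Real photogrammetry uses
# far more sophisticated feature detectors (e.g. SIFT or ORB), but the
# principle is the same: a blank background yields nothing to lock onto.

def count_features(image, threshold=30):
    """image: 2D list of grayscale values (0-255)."""
    rows, cols = len(image), len(image[0])
    features = 0
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = [image[r - 1][c], image[r + 1][c],
                          image[r][c - 1], image[r][c + 1]]
            # Feature-like: differs strongly from at least one neighbour.
            if max(abs(image[r][c] - n) for n in neighbours) > threshold:
                features += 1
    return features

# A perfectly white backdrop: uniform, so no contrast anywhere.
white = [[255] * 10 for _ in range(10)]

# A messy, scratched surface: scattered dark marks give many features.
scratched = [[255 if (r * 7 + c * 13) % 5 else 0 for c in range(10)]
             for r in range(10)]

print(count_features(white))      # 0 -- nothing for the software to track
print(count_features(scratched))  # many candidate reference points
```

This is why the turntable setup fails: from the camera’s point of view the background never moves, so the only trackable marks contradict the object’s rotation.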
After several attempts I did eventually get some 3D images that I could start to use, and I could consider what to do with them next. At this point I was quite keen to output them as 3D prints and look at creating bronze casts. Before that, I had the opportunity to play with the images in another piece of software, ‘Blender’, the idea being that I could modify and adjust the 3D images. I had a play to see what I could do, but felt I was adjusting the image and trying to perfect it, which was exactly what I didn’t want to do: I wanted the outcome to be true to the translation, however it may have evolved from the original.
Looking at the 'Blender' software.
Potentially, in Blender you can create an object of a given shape and size and then manipulate it, pulling, stretching, twisting and scrunching, to create a piece of digital sculpture, all without actually touching the object with your hands, using just the mouse, the keyboard and the language of the software, which in turn becomes a series of binary codes as instructions. The screen then gives us a visual interpretation of what the object might look like; we understand the screen, we need to see it visually to comprehend it, but it doesn’t actually exist as a lump of clay or plaster does. Where is the hand of the artist?
I have several objects that have been 3D imaged and are being printed by the 3D printer. (See page 17). The decision to let them be printed as they had been interpreted is quite deliberate: I want to embrace the translation, the language passing from machine to machine with little intervention. The process of getting the items to a digital 3D state will be easier next academic year, as the 3D department will have a 3D scanner. This may eradicate many of the issues with the process, but it may also offer further opportunities for new developments and more unexpected outcomes.
The intended outcome for these images is to cast them in bronze from the 3D outputs.
Why cast? Why bronze?
I still find the original maquettes quite fascinating. I would never have created them the way I did without the instructions, had I just played with the contents of the bag; I would never have considered ‘weaving the wire with the glue’ or ‘wrapping the plastic strips with the foamboard’. It created a series of objects that were unique in their construction, and even if they were not aesthetically pleasing, they still spoke of the original materials used. I wanted to lose that element and neutralise the object: take it a stage away from the materials used, join it all together, bond the object as one item rather than a collection of random items constructed as one thing.

The idea of casting takes them to another dimension. By casting in metal (I could have used aluminium or other metals) it elevates them from the original; it also mystifies them further, takes them a stage further away. Almost like Chinese Whispers, the item develops through every stage, every translation. And, as with Chinese Whispers, the end product bears a similarity to the original word or phrase but has developed through the imperfect comprehension of each transition. You often wonder how a certain word or phrase ended up as it did: to what degree had people misheard it, or even made up a completely new one? On reflection, I have purposely let each stage translate the object, with little or no control.