Costas Kazantzis focuses on identifying novel ways through which game engine technology, 3D design, and XR can reshape the way fashion and art content is disseminated. His work lies at the intersection between fashion media production, visual communication, and computer science. Through his deep understanding of immersive technologies and experience working across collaborative digital fashion projects, Costas provides insight into the delivery and development of these projects from conception to realisation.
Hi Costas. Thank you for sitting down with me and enlightening me on this subject. First of all, you’re an expert in several different sectors, and you literally bridge the gap between fashion and technology for a living. Tell the readers exactly what it is that you do. (All genres / jobs, please) 🙂
Hey hey! Thanks for inviting me 🙃 My practice focuses on identifying new ways of disseminating fashion within hybrid worlds. I’m very interested in using technology and emerging media to blend the physical and digital realms and explore immersive storytelling. My projects could include many different outputs like a digital fashion catwalk, an online 3D space to explore the vision behind a collection, an AR template to augment your physical garments with digital effects, and a purely digital virtual reality experience, among others.
In that context, I work a lot with game engines, which are tools that were initially introduced to create video games; however, they have gradually been adopted by several other industries and art disciplines. These toolsets establish the infrastructure through which every metaverse experience will be developed.
Through our existence and interaction within digital platforms, we can express ourselves in ways that go beyond the physical and therefore play with layers of abstraction that weren’t possible to achieve before. The body becomes more of a starting point than a territory, and to me, that holds significant interest for fashion design and presentation.
At the moment, alongside developing my practice, I am the Lead Creative Technologist at the London College of Fashion’s Fashion Innovation Agency, where we have an exciting remit to demonstrate applications of cutting-edge technology within the realms of fashion and retail. My job includes experimenting with new technologies, bringing together fashion brands and technology partners to develop proof-of-concept collaborations, consultancy, public speaking, and lecturing about new media at the university.
And can you break down, in layman’s terms, what Augmented Reality means and what its purpose is?
Augmented Reality (AR) can be described as the infrastructure that enables us to augment our physical surroundings with digital, immaterial layers. In that sense, we can imagine the real world around us becoming a template on top of which we could build different kinds of virtual experiences.
Something exciting to me is that we can program those moments to be interactive and multiplayer. So it’s not just about building 3D worlds but also making them reactive to user inputs – I’m excited to see how AR could be the starting point for developing an operating system that goes beyond the screen as a dimension and exists in sync with our surroundings.
With many advancements in AI, this is becoming even easier, as our machines and software can now recognise physical world objects and help create experiences that are responsive, seamless and specific to where we physically are at any moment.
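To make that a little more tangible for readers who haven’t touched AR tooling, here is a minimal, hedged sketch of the idea in code. It is not Costas’s own workflow; it is a generic example using the browser’s WebXR API that detects a real-world surface and reports where digital content could be anchored onto it (rendering setup is deliberately left out).

```typescript
// Minimal WebXR sketch (assumes a WebXR-capable browser and the "@types/webxr"
// type definitions; rendering setup via XRWebGLLayer is omitted for brevity).
// It starts an immersive AR session, hit-tests against real-world surfaces and
// reports where digital content could be anchored.
async function startARSession(): Promise<void> {
  // Note: requestSession must be triggered by a user gesture (e.g. a button click).
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    console.warn("Immersive AR is not supported on this device/browser.");
    return;
  }

  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });

  const referenceSpace = await session.requestReferenceSpace("local");
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource!);
    if (hits.length > 0) {
      const pose = hits[0].getPose(referenceSpace);
      if (pose) {
        // In a full experience, a 3D garment, effect or scene would be
        // positioned at this detected real-world point.
        console.log("Surface detected at", pose.transform.position);
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
// Call startARSession() from a user-initiated event handler.
```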
So, game engine software and AR tools: how long has it taken you to get as good as you are now? And for a total newb like me, just how advanced are these programs?
That’s a question I am often asked, primarily because of my involvement with an academic institution.
The range of tools and software available is broad. Game engines (such as Unreal and Unity) are remarkable pieces of software, mainly because they form an ecosystem within which any interactive 3D experience can be created.
When I started working with them, there was a time when I felt a bit lost due to the magnitude of things that can be developed through them. I realised that the most important thing was to actually think of the stories that I desired to disseminate, the vision behind them as well as the ways through which I wanted my audience to interact with them. Once I reached that point, identifying the right tutorials and workshops became a much simpler process.
I find it fascinating that so many resources are available online for free – I’ve spent countless hours on YouTube, GitHub and other platforms that provide these kinds of resources. Many of the digital artists I interact with are entirely self-taught, myself included, and we should fight to maintain this open-source system of shared knowledge and insight.
Another essential point to mention here is that technology companies are releasing software for developing metaverse experiences that does not require an advanced understanding of coding. In that sense, any creator can download those tools and start playing around. Regarding AR, software like Snapchat’s Lens Studio has paved the way towards democratising AR creation, as it provides fully customisable templates tailored to the needs and skill sets of anyone interested in creating AR experiences.
With you being an expert on these topics and seeing things from the inside, when did you begin to see the Metaverse and NFTs rise in popularity? Was it something you expected, and did it happen simultaneously? Talk to me a little bit about your experience.
I might have mentioned this in one of your other questions already, but several of the technologies used to develop metaverse experiences are not always new. What’s changing is the accessibility and convenience of using them (e.g. using just a smartphone for 3D scanning) as well as the ability to stream those experiences in real time (so that people can participate in them rather than experience them as passive viewers). For example, the film and VFX industries are pretty advanced in terms of the 3D pipelines they’ve been using for years.
When it comes to fashion, I’d say that it’s been advancing at a slower pace. For me, Covid was when fashion brands realised the potential of committing time and resources to the metaverse. It was a moment during which physical shows couldn’t be realised; however, looking at it now, post-Covid, it seems that brands have understood the value and impact that those technologies can have when merged with traditional practices.
Virtual production, for example, allows for immersing real-world models into purely digital sets and shooting imagery and moving image content at an exceptional quality. This could completely revolutionise the way fashion film and campaign imagery is created. So, I guess we’ll be seeing more in the future!
And speaking of seeing this from the inside, what is the NFT / metaverse community like?
I’m very interested in exploring community and the idea of the “safe space” that can be nurtured through metaverse technologies. Hopefully, many of the tools are still open source and can empower independent creators and artists to design those spaces and make them inclusive and accessible.
In terms of NFTs, it comes back to your other question about what has the longest longevity. NFTs are cryptographic assets that are currently popular among blockchain-savvy groups of people – the metaverse, on the contrary, can encompass a much wider range of diverse communities and will exist regardless of whether NFTs are built into its infrastructure or not.
Tech and fashion have been linked closely together for quite a while. First it was through outdoor wear, where we saw various brands implement different kinds of tech in their garments, like PVC-coated PES fabrics and self-lacing systems. But AR takes it ten steps further. Why do you think these two genres go so well together?
These elements of interaction between fashion and technology greatly inspire my work. I was always drawn to cross-disciplinary ways of working. Even though there were moments throughout my career when I felt lost (my background includes computer engineering, biomedical research, photography, and fashion), I have realised that these times were significant in getting closer to who I truly am and what I represent creatively.
Fashion is much more about storytelling than the actual product; the item itself is stripped of meaning when it loses its ability to trigger emotional responses. Technology is providing new ways of communicating with one another as well as an infrastructure to express ourselves in multiple layers. It’s providing us with new means of disseminating stories, and there I can see how important the interaction between the two fields is. Through AR, we will not only be able to wear digital fashion in everyday life scenarios but also customise and interact with our already existing physical pieces.
At the FIA, I was recently working on a project that explores ways of digitising physical outfits so they can eventually be worn and manipulated digitally. Through this, product life cycles can be extended (with a positive impact on overproduction), but the designer-wearer relationship is also shifting in new creative directions, since the audience now has the opportunity to play around with their garments and customise elements of them through digital augmentations.
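Purely as a hypothetical illustration of that last idea (this is not the FIA project’s actual pipeline, and the file name and colour palette below are invented), customising a digitised garment can be as simple as swapping the materials on its 3D model at runtime; the sketch assumes a glTF digital twin rendered with three.js in the browser.

```typescript
// Hypothetical sketch of runtime garment customisation (not the FIA pipeline):
// load a digitised garment and let the wearer cycle through digital "looks"
// by swapping the colour of its materials. Assumes a scene/renderer exist elsewhere.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const looks = [0xff4f9a, 0x26c6da, 0xffe082]; // invented placeholder palette
let lookIndex = 0;

export function loadCustomisableGarment(scene: THREE.Scene): void {
  new GLTFLoader().load("assets/garment-digital-twin.glb", (gltf) => {
    scene.add(gltf.scene);

    // Each tap/click applies the next "look" to every mesh in the garment.
    window.addEventListener("pointerdown", () => {
      lookIndex = (lookIndex + 1) % looks.length;
      gltf.scene.traverse((child) => {
        if ((child as THREE.Mesh).isMesh) {
          const material = (child as THREE.Mesh).material as THREE.MeshStandardMaterial;
          material.color.setHex(looks[lookIndex]);
        }
      });
    });
  });
}
```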
The above is one example of the diverse things that can be achieved when working between fashion and tech, and of how we can aim to bring positive impact to an industry that increasingly requires change.
With that in mind, where do you see AR technology in, let’s say, 10 years? Specifically within fashion, but also in general. Will it be a normal thing, implemented in our everyday lives? Or what do you think?
That’s a tricky one, haha! – mainly because the software is advancing much more rapidly than the hardware.
What’s interesting now is that we have many different toolsets and software to design experiences for AR and the metaverse, and the use of these is becoming increasingly democratised. A younger generation of artists is leading the way in 3D creation. However, we still lack that one device that will make the experience of this content more widely distributed. Headsets have advanced in size and ease of use; however, looking at the current landscape (Apple postponing their AR glasses release indefinitely, Microsoft reducing their staffing around mixed reality), it’s not clear whether mass adoption of headsets can be expected soon.
When you say “we’re missing the one device that will make the experience of this content more widely distributed”. What do you mean by that?
Tech companies have been integrating augmented reality into their devices and software for a while. At the moment, the hardware we have available to experience this content is mainly smartphones, which presents challenges in terms of the level of realism that can be delivered as well as how seamless those AR moments are (as far as their integration with the real world is concerned). Therefore, the device I am referring to is firstly a matter of higher processing capabilities blended with a convenient user experience. Glasses make more sense in terms of providing realistic physical-virtual world interactions. Still, from a technical standpoint, we need significant time to reach the stage where they can replace our smartphones; it’s more likely that, in the beginning, they will work together, as we will need our smartphones as processing units – in a similar manner to how the Apple Watch works.
We shouldn’t forget, though, that there are many different ways to experience the blending of the physical and digital worlds apart from headsets. The way we experience the web is inspired by 3D interactions and is gradually shifting away from 2D endless scrolling. Large-scale screen installations are appearing in cities worldwide, acting as canvases for virtual graphics. For a while now, we have used our phones to create multilayered identities (through filters, for example) or to try on fashion accessories and, to some extent, full outfits.
Our digital interactions are becoming increasingly 3D, and interfaces are more convenient to use across generations.
Coming back to your question, therefore, I think that AR will be an integral part of what our future digital experiences look like. Advancements in real-time streaming will enable us to share photorealistic graphics across locations, eventually making it hard to distinguish what’s real and what’s not, whether we view this content through a screen or through glasses.
And why is Apple postponing the release of AR glasses indefinitely?
Considering Apple’s attention to user interface and adaptability, it’s a matter of technical challenges in designing a consumer-friendly, lightweight AR product that can be easily adopted. It’s being rumoured, though, that the company will release a lower-cost mixed-reality headset (combining elements of both augmented and virtual reality) in early 2025.
Somebody once told me that big tech companies are actually far more advanced than the products they release. Meaning that they deliberately release “old technology features” to stretch out a product’s time on the market for more years, getting more money out of the customer – before actually releasing their newest technology. What do you think, is there any truth to that?
Tech companies dedicate significant time and resources to R&D. Due to the nature of our work at the FIA (looking into cutting-edge technologies that are 3-5 years from implementation), we often come across these R&D teams and are introduced to the newest updates. Often, these products are far from being commercialised, so it’s a matter of adopting the right strategy to identify the target audience and how ready the market is for such a release. Having said this, what you describe happens because there is a long journey from research and experimentation to designing a fully functional product that can be easily commercialised.
Another one of your specialties is creating worlds in the metaverse for brands, companies and artists. Walk me through the process of creating these 3D worlds, from beginning to end result.
3D environment creation is a big passion of mine – by playing around with materiality, texturing, proportions, and animation in a 3D sense, you can unlock endless ways of creative expression. It’s not about replacing the physical world but enhancing it and designing inspiring visual moments.
I create most of my environments within game engines. Depending on the project and the collaborator I am working with, I will need to acquire different kinds of files, such as motion capture (extracting animation data from a real-life performer to animate digital entities), photogrammetry (scanning real-world objects and turning them into 3D models), volumetric capture (recording video and stitching it into an animated 3D representation), and digital design (designing fashion digitally), amongst others.
These assets will then be incorporated into a game engine (Unity or Unreal) and placed together with the 3D landscape that will host them; effects will be added (e.g. particle systems, VFX, atmospherics), and then the audience’s interaction will be programmed on top.
The process is similar to exhibition-making from a physical standpoint, with the slight difference that the curator’s work is all done within the game engine.
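As a rough, concrete illustration of that assembly step (deliberately not Unity or Unreal, and not taken from any actual project), here is a small web-based analogue in TypeScript using three.js: a captured asset (the file path is a placeholder) is loaded into a scene, a simple particle layer stands in for the engine’s VFX, and a click interaction is programmed on top.

```typescript
// Simplified web analogue of the engine workflow described above, using three.js.
// The asset path is a placeholder; in Unity/Unreal the equivalent steps would be
// importing the asset, dressing the level, adding VFX and scripting interaction.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 1.5, 3);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

// 1. Bring a captured asset into the digital landscape
//    (e.g. a photogrammetry or volumetric capture exported as glTF).
let asset: THREE.Object3D | null = null;
new GLTFLoader().load("assets/scanned-asset.glb", (gltf) => {
  asset = gltf.scene;
  asset.position.set(0, 0, 0); // place it within the scene
  scene.add(asset);
});

// 2. Layer a simple particle effect on top, standing in for engine VFX/atmospherics.
const positions = new Float32Array(500 * 3).map(() => (Math.random() - 0.5) * 4);
const particles = new THREE.Points(
  new THREE.BufferGeometry().setAttribute("position", new THREE.BufferAttribute(positions, 3)),
  new THREE.PointsMaterial({ size: 0.02 })
);
scene.add(particles);

// 3. Program the audience's interaction: clicking the asset queues a quarter turn.
const raycaster = new THREE.Raycaster();
let spinTarget = 0;
window.addEventListener("pointerdown", (event) => {
  if (!asset) return;
  const pointer = new THREE.Vector2(
    (event.clientX / innerWidth) * 2 - 1,
    -(event.clientY / innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(pointer, camera);
  if (raycaster.intersectObject(asset, true).length > 0) spinTarget += Math.PI / 2;
});

// Render loop: gentle ambient motion plus the interactive spin.
renderer.setAnimationLoop(() => {
  particles.rotation.y += 0.002;
  if (asset) asset.rotation.y += (spinTarget - asset.rotation.y) * 0.05;
  renderer.render(scene, camera);
});
```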
What is your opinion on the future of NFTs and the Metaverse? And which do you think has the longest longevity?
The metaverse is a broad term that encompasses various technologies and creative processes. These technologies continue to pave the way towards seamless interaction between the physical and digital realms. That said, any industry interested in storytelling and addressing wider audiences in more meaningful ways will, more or less, have to realise projects using 3D pipelines.
Therefore, in my opinion, the metaverse has the longest longevity. It can and will exist either with or without NFTs built into its infrastructure.
Ok, so when you’re not working in fashion, you’ve also been doing stuff around contemporary art, mainly on the curating and experience-making side of things. Talk to me a little bit about that.
Yeah, definitely – I have been working a lot around exhibition-making within metaverse ecologies.
I started doing this mainly during Covid, when we couldn’t host physical events, but I can see lots of interest even post-Covid. I think one of the reasons for this is that the accessibility of 3D tools allows us to create safer, more inclusive, decentralised online communities, which is extremely important at a time when mainstream social media and censorship policies are excluding more and more voices.
The process again involves game engines and 3D design, modelling and animation software (like Blender) to put together digital shows and immersive art experiences.
Projects I have worked on (which you can find out more about on my website) include an online photobook launch event with artist Linn Phyllis Seeger; a self-published hybrid magazine, Sivras (print, AR app, VR performances); a car racing video game as a way to explore Romeo Roxman Gatt’s performance, moving image, and sound work; and a virtual dance platform for people living with Parkinson’s (using volumetric captures) in collaboration with Robert Bridger, amongst others.