The Top Three Games of 2018 as Voted by Artomatix
Thu, 31 Jan 2019

2018: the year the real world descended into chaos, and the world of gaming went back to the familiar with good old-fashioned reboots of the classics. Quite a few ‘new-old’ games popped up on our radar. Crash Bandicoot made a popular comeback, the PS1 returned as the PlayStation Classic, and a brace of ‘oldies-but-goodies’ got various levels of reboot. Two Monkey Island iterations were released, there was the excellent Fate of Atlantis, and Disney put an HD spin on Star Wars Episode 1 Racer. Along with all our old favorites coming back on the market, here are some of the other top games getting our attention, and why:

1. Red Dead Redemption 2

“Red Dead Redemption 2 – incredible open-world gameplay with attention to detail in every facet of the game. I loved the first and didn’t think this one would hold up to it, but the story alone had me hooked from the get-go.” – Geoff

2. God of War

“God of War, primarily, is not a game about fighting. Certainly, combat is a core gameplay mechanic that remains fun throughout, steadily improving as you develop, but it serves merely as window dressing wrapped around a tightly wound narrative between father and son. It’s an emotional and engaging story, and this is what drives the game. All of this is delivered with astounding artistic merit; the scale and breadth of the world are sold at every turn – enough for you to forget that the environment you traverse is quite limited compared to Spider-Man’s.” – Fionn

3. Spider-Man

“Spider-Man, on the other hand, lives and dies on its gameplay. While GoW was no slouch in this department, it is never quite as thrilling as when Spider-Man cavorts through New York hanging from a web. The movement and animations are so fluid and easy to enact that you get lost in the mechanics. It perfectly sells the exhilarating experience of web-slinging, and the joy of that cannot be overstated. I think the largest testament to this is that even after unlocking fast travel I didn’t use it once, instead opting to journey to my destinations manually.” – Fionn

4. Breath of the Wild

“Breath of the Wild – well, I was already a fan of the Zelda franchise (although, who isn’t?). Breath of the Wild really stood out from the other installments, though, because they took an ancient formula (Zelda has always followed a very specific gameplay/storyline formula that was feeling dated) and perfectly updated it using modern game concepts such as non-linear stories and open worlds (they actually took a lot of gameplay elements from Skyrim). I’d say the majority of the mechanics were no longer those of a Zelda game, yet somehow it still felt like a Zelda game at its core. Anyway, besides it being an incredibly detailed, polished and beautiful game with very few flaws, I’d say my personal favorite aspect relative to older Zelda games was the non-linear design. By that I mean the old Zeldas forced you to play the game in a certain order: (1) explore this area of the world map for an entrance to the main dungeon, (2) sequentially solve puzzles in the dungeon to get to the boss, (3) fight the boss to unlock access to the next area to explore, and then repeat steps 1-3 until you reach the final boss.

The old Zeldas were good too, but if you got stuck on a puzzle or a boss, or just weren’t feeling like solving a puzzle and wanted action instead, you couldn’t do anything else in the game until you got through that hurdle. Breath of the Wild lets you (1) explore, (2) solve puzzles, and (3) fight enemies on your own terms, whenever you want to. So I never really got bored with BotW: if I got sick of exploring, I could capture a dungeon; if I felt like battling a boss for better items, I could do that too, whenever I wanted. It’s really a game that tailors its gameplay to whatever you feel like doing while also feeling productive, as if whatever you feel like doing is also helping you complete the game.” – Eric

Okay, we know we said three, and we know we said 2018, but we just couldn’t help ourselves, since we spent so much of 2018 playing Breath of the Wild. It’s also not surprising that God of War managed an impressive 3.1 million copies sold within three days, only to have that record beaten later in the year by Spider-Man, which managed 3.3 million in the same time. People seem to be just as impressed as we are! We look forward to another year of excitement, with some considerable releases lined up for Activision among other big names. As for old games making a reappearance: it’s been exhilarating seeing all of the fantastic graphics remastered for modern times, and seeing photogrammetry become the workflow of choice to create the 3D scenes in well-known games. Although, it’s a little crazy how long it’s taken for all of this to happen. If only there were AI software to speed up this process…

NEWS: COMMENT ON STATEMENT FROM AN TAOISEACH AND MINISTER HUMPHREYS IN RELATION TO €75M DISRUPTIVE TECHNOLOGIES INNOVATION FUND
Mon, 10 Dec 2018

Artomatix is pleased to confirm it has been selected as one of a number of remarkable companies granted funding by the Irish Government under DTIF, as per the statement […]

“We are naturally delighted to be recognised, along with our consortium colleagues, as one of the most promising technology companies coming out of Ireland by the Irish Government. This funding is hugely significant for us as it will allow us to build our team more quickly and invest in developing new products 12-18 months earlier than would otherwise be possible. This means we can stay ahead of our competition, which includes some of the largest software companies in the world. Our team has developed a solution that will redefine how 3D content gets created over the next twenty years. We use Artificial Intelligence (AI), Neural Networks and other deep tech applications to automate 80-90% of the 3D artistic workflow.”

ArtEngine from Artomatix is an award-winning AI software solution for 3D studios, with proven benefits for multiple industries including gaming, automotive, interiors, textiles, fashion design and product design. As part of its plans to grow and scale, the firm will make a number of key technical hires to help accelerate the product roadmap for ArtEngine 2.0, including engineers, developers and research analysts. For more information visit the careers page.

ZenEarth VR uses AI to help a scan-based photogrammetry work-flow

“Oh, no, I’m lost” is not something one usually wants to hear themselves say when roaming through a rainforest. Feeling lost on unfamiliar ground sounds a little alarming in theory, but in practice it’s incredible. Immersing yourself in a biodiverse world of trees, plants and animal life while losing track of time and sense of direction is not a usual experience for most. How often does one get lost in a rainforest, walk snowy mountains and windy deserts, and discover the ruins of an ancient civilization – all within the space of ten minutes?

Soon, as often as you like – thanks to the folks at ZenEarth VR. Founded in March 2018 by a team of industry veterans, ZenEarth VR already has an impressive lineup of partners including Oculus, PlayStation VR, VIVE, and Artomatix. By bringing people to real-life locations and letting them discover hidden treasures from wherever they are, ZenEarth VR believes it is “one step away from teleportation” and is incredibly excited to launch its first product with the help of Artomatix. This month, ZenEarth VR announced the first destination available to the public: Belogradchik, Bulgaria. The VR studio has spent the past nine months preparing to launch and notes that “there are no compromises when it comes to quality. Our philosophy is that our product must be the market benchmark for quality VR. The whole environment has to be immersive and interactive, even better than the real location.” By using a scan-based photogrammetry workflow, the team has created high-detail, realistic 3D environments in a virtual world that feels real.

How is Artomatix helping?

ZenEarth VR is using ArtEngine by Artomatix to transform their scan-based textures into production-ready assets. ArtEngine automates 80-90% of the time-consuming, repetitive tasks tied to photogrammetry and 3D art creation. In particular, the time it would normally take to groom the many thousands of scans required for a ZenEarth VR production has been largely eliminated, freeing up an enormous amount of time for the team to work on higher-value tasks.

Through automation powered by AI and neural networks, the ZenEarth VR team has sped up the whole development process and can launch much sooner than initially thought. Tasks that would previously have taken a week are now completed in 60 minutes or less.

“Artomatix is a key factor for us – expediting the whole development process. It is one of the most important ingredients for us to scale rapidly and launch our product 40% faster.”

–Georgi Georgiev, Executive Producer at ZenEarth VR

With ambitious goals, ZenEarth VR and Artomatix are working together to “bring tourism home. And make people feel as if they are present in the location.” Both companies are passionate about producing the highest quality output “to allow anyone around the world, regardless of their schedule, location or ability to travel, to feel fully immersed in a virtual world.” With the product launching next month, both companies are looking forward to demonstrating the groundbreaking technology to the world.


Direct on mesh synthesis of PBR materials now supported by ArtEngine from Artomatix
Mon, 13 Aug 2018

As part of its ongoing mission to disrupt and fundamentally change how 3D content is created, and following on from the launch of ArtEngine’s material synthesis functionality at GDC, Artomatix is delighted to announce that it now supports the synthesis of full PBR materials directly on the surface of a 3D mesh.

Commenting on the launch, Artomatix CTO Dr. Eric Risser said:

“This innovation is fundamentally different from the way texture artists apply materials onto their 3D models today. Current workflows are dominated by manual painting, which is powerful but labor intensive. There are a few planar projection techniques that can help speed things up, such as tri-planar mapping, which are fast but can lead to repetitious features, seam artefacts and stretching, and they don’t really lend themselves to artistic control. On-model synthesis offers a simple but powerful tool to sit alongside the others, where the artist can direct the high-level properties of the material, such as how it flows along the surface of the mesh as well as its size at any given position. By leveraging A.I. and an example of the desired material, on-model synthesis can generate a full PBR material over the surface of the mesh, taking UV space into account to create a new, unique texture that looks organic while avoiding seams.”

On-model synthesis performs well not only on simple textures and models but also on very sophisticated materials and shapes. To highlight this, Artomatix has put together a small demo showing the technology in action. In the demo, a fire-breathing gecko is perched on a charred tree branch. The model was textured by manually painting a flow map and a size map directly on the model; multiple PBR materials were then synthesized over the surface of the gecko and blended together.

Artomatix has been working over the past 3-6 months with some leading AAA studios who are looking to fundamentally update the way they texture their models. Their goal is to automatically add the richness and variety of the real world into their digital creations with minimal overhead or disruption to the artistic workflow. By pulling in A.I. powered by NVIDIA GPUs, Artomatix’s on-model material synthesis achieves this.

Unity discusses Artomatix as part of their photogrammetry workflow
Sat, 24 Mar 2018

Cyril Jover and Mathieu Muller of Unity Technologies discuss their photogrammetry workflow at Siggraph 2017. Here they present the technology behind the de-lighting tool and how to get started with photogrammetry. You can see how they take advantage of the benefits of Artomatix (about 23 minutes into the presentation). For a detailed look at the range of features in the ArtEngine tool from Artomatix, head to the product overview page: https://artomatix.com/create-overview.php

A.I. seems to be in the news a lot these days, and I am often asked why I think A.I. is gaining so much momentum compared to other emerging technologies. I believe several factors contribute to this:

First, the internet plus the “internet of things” has produced incomprehensible amounts of data, and A.I. is the only tool capable of extracting a signal out of all that noise. For this reason, industry and government are both deeply invested in A.I.

Second, Deep Learning (the new buzzword for Neural Networks, a subcategory of A.I.) has recently transitioned from a 50-year-long science fair project into a breakthrough technology achieving state-of-the-art results that rival humans in several fields. A big part of this recent success is due to the availability of big data sets which, thanks again to the internet, can be used for training, and to GPUs becoming general-purpose enough that they can now run Neural Networks orders of magnitude larger and faster than was previously possible. One example is Nvidia, which has put considerable resources over the past five years into turning graphics cards into Neural Network supercomputers. It is, in fact, thanks to all of this that things like self-driving cars, which would have been science fiction five years ago, are now a reality.

Third, and I believe the most important reason why A.I. is taking center stage, is its wide application across so many fields. Like computers, or the internet, A.I. is not a technology that has been designed around a single purpose or application; it is a general-purpose “enabler” technology that can enhance and improve almost anything from biotech to architecture. Chances are that cancer and cold fusion are going to be solved with the help of an Artificial Intelligence in the future. On that note, Artomatix is applying A.I. to the creative industry, a field that has never utilized it before, and we are seeing some amazing things as a result!

You might ask whether I am asserting that A.I. is mainstream now with wide applications. Exactly! A.I. is not really Terminators, or some kind of Will Smith-style iRobot. A.I. is in every aspect of your digital life already. For example, right now Facebook’s neural-network facial recognition is so good that if someone takes a picture of you and puts it in their newsfeed, Facebook knows you are in that picture, and if you are not already friends, Facebook can recommend you. Another example: your search-engine queries are constantly being monitored by an Artificial Intelligence. They can act as virtual doctors and alert you to serious health conditions you might have, such as a rare form of cancer, if you happen to search for the right combination of specific symptoms… helpful or creepy, I leave it to you to decide.

So A.I. is now widespread for analytical tasks, but creativity and imagination are something we like to think is still uniquely human. How, then, can A.I. be applied to the 3D art creation process?

Traditionally, computers have been very bad at simulating creativity. The reason is that the common computer, i.e. the Turing machine, follows a very different computational model from the neural networks inherent to life on Earth. In many ways, computers are far superior to brains. The average CPU can do around three billion calculations per second, while the human brain maxes out at around 30. Computers are extremely good at onboarding, storing and processing huge amounts of data very precisely, whereas brains are relatively slow to learn new information, often store it in a flawed and incomplete manner, and our calculations are far from precise. That said, for many tasks, including being creative, the human brain vastly outperforms the Turing machine. The reason is parallelism. Computers only have two to eight concurrent processes running in parallel (i.e. they can only think about a few things at the same time), whereas the human brain is a massively parallel computer with millions of threads all running together. The brain also constantly rewires itself, making new connections and breaking old ones. Memories and ideas can meld together to create new experiences in our heads. It’s our imprecision that makes us so good at coming up with new ideas, and that’s why computers have never really been applicable to the creative fields… until now.

Now you are probably asking yourself: why would an artist want an A.I. to compete with them? It is a common misconception that A.I. will compete with or replace humans. The key is that while A.I. can program itself to an extent, at its core there is some fundamental metric it uses to measure success, and this cannot ever change. This is actually true for any intelligence, including our own. For life on Earth, the fundamental metric of success is survival and reproduction (hence why we are so afraid of competition). When it comes to A.I., however, we can start over from scratch. At Artomatix, our A.I. measures its success by how well it helps our users. Its happiness is derived from your satisfaction!

There is actually a lot for artists to gain by embracing A.I. in their workflow. To start with, we can draw a lot of parallels with the evolution of the word processor. Originally people used typewriters to create documents; when PCs became commonplace in the early ’80s, the typewriter was one of the first things to go. Computers gave people the ability to correct mistakes and make digital backups of their work to edit and re-print later. The creative industries also moved to computers for the same reason. Artists abandoned their paint brushes and drafting tables long ago for Wacom tablets and Photoshop.

That said, word processing has come a long way since the ’80s, while art software has not. Spelling and grammar checkers are commonplace now. Autocomplete can often predict what we are going to write and finish text messages for us. Speech recognition has gotten so good recently that documents can now be dictated rather than typed manually.

The real value of computers is their ability to do work that only humans could do previously. Applying Artificial Intelligence to artistic tasks is kind of the same. In fact, we are already starting to see these “smart tools” emerge, such as Photoshop’s Content-Aware Fill, which is to images what auto-complete is to word processing. I think initially we’re going to see a lot of these “smart tools” pop up, which will help fix/auto-complete the really tedious and not very creative effort that goes into making 3D content. After that, we will see tools that can learn content and style from an artist and mass produce it. That is what Artomatix is currently focusing on.

In the long term though A.I. will likely enhance human creativity in more meaningful ways and lead to fundamentally new ways of being artistic. For example, imagine a system where you wear a helmet that monitors your brainwaves and you sit in front of a screen that shows you what you are thinking and feeling. Being able to see this on the screen will automatically trigger new thoughts and ideas which will, in turn, be fed back into the system. This would make a creativity feedback loop where ideas are quickly generated and refined with very little effort.

Into the Cave with Pete McNally
Fri, 16 Mar 2018

Pete McNally is a Senior Designer/3D Generalist with over 14 years’ experience in the Irish games industry. He works with an Emmy award-winning tech firm, and he has also worked with a twice Oscar-nominated animation studio. We were really impressed with some work he did recently using Artomatix as part of our Alpha group, which he subsequently published on his blog: https://petemcnally.com/

Here’s what Pete had to say.

“I had a partial scan that hadn’t resolved well; it was wet rock on a very sunny day, so large areas of detail were missed or blurry. I baked out what I had in 3ds Max: diffuse, normals, AO, height and a shadow map used to help with manual de-lighting in Photoshop. I ran these textures through Artomatix for seam removal and it tiled them quite nicely. After some tweaking, I applied the material to a sphere in Toolbag 3 and tested out some lighting environments, before applying the same material to the inside of a curved cylinder, tiled appropriately, to make the cavernous environment you see below. Not bad for a single material!”

How many times have you heard somebody say “enhance that!” in various cop shows over the years, as the good guys huddle over a screen looking at grainy footage, struggling to ID the bad guys?

Well now the ability to do so is just a neural network away and ironically those shows themselves are about to be enhanced!

Speaking of television, 90% of all video content ever created is now becoming obsolete by the standards of modern television. Netflix, Amazon Prime, Hulu and network broadcasters are struggling to find or create 4K content to keep up with demand. Meanwhile, we as consumers have invested our hard-earned dollars in ultra HD TVs, so our expectations are high!

As an example, The Lion King was among Disney’s first movies to leverage computers and a digital workflow. The lines for each frame were still sketched by hand, but the drawings were scanned and then painted digitally on a computer. As a result, there’s no original source material that can be rescanned at 4K, which is how this type of upgrade has traditionally been done. Certainly, back in the early ’90s, 2K seemed like a very future-proof resolution to the decision makers at Disney. Oh, how times have changed.

The movie industry is littered with similar examples dating back as far as 30 years, where live action analog film has been fused together with digitally created visual effects in a symbiotic relationship that makes it impossible to simply rescan old content.

If we switch focus to the world of animation, rendering accounts for a large part of the time and cost that goes into an HD production. With the recent hike in demand for 4K content, this effectively quadruples the average $500k rendering budget for a 90-minute HD feature.

Animated television series aren’t just 90 minutes long, they’re 11 minutes times 52 episodes, or 572 minutes. With distributors like Netflix and Amazon making 4K a requirement, this means that either profits go down or rendering quality suffers.
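The arithmetic behind that squeeze can be put in a few lines. The $500k budget and the 52 × 11-minute series math come from the text above; the assumption that render cost scales linearly with pixel count is ours, not the article's:

```python
# 4K UHD has exactly four times the pixels of 1080p HD.
hd_pixels = 1920 * 1080
uhd_pixels = 3840 * 2160
scale = uhd_pixels // hd_pixels          # 4

# Assumption: render cost scales linearly with pixel count,
# consistent with the "quadruples the budget" claim above.
hd_feature_budget = 500_000              # 90-minute HD feature (figure from the text)
uhd_feature_budget = hd_feature_budget * scale

# A typical animated series: 52 episodes of 11 minutes each.
series_minutes = 52 * 11                 # 572 minutes
# Pro-rate the 4K feature budget to series length:
series_budget_4k = uhd_feature_budget * series_minutes / 90
```

Under those assumptions, a full 4K series render lands in eight figures, which is why "profits go down or rendering quality suffers" is not an exaggeration.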

How can neural networks fix these problems? Let’s walk through the process with a real-world example, starting from a low-resolution input image.

The standard industry approach today is to use bicubic interpolation to increase the resolution of the image and then sharpen it using signal processing. The improvement is very subtle, however, because signal processing can’t add new details to the image; it can only emphasize the details that are already there.
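To make that classical step concrete, here is a minimal, pure-Python sketch of Catmull-Rom bicubic upscaling on a grayscale image stored as a 2D list. Production tools would use an optimized image library, and the sharpening pass is omitted; this is an illustration, not the industry implementation:

```python
def cubic(p0, p1, p2, p3, t):
    # Catmull-Rom interpolation between p1 and p2 (0 <= t < 1).
    return p1 + 0.5 * t * ((p2 - p0) + t * ((2 * p0 - 5 * p1 + 4 * p2 - p3)
                                            + t * (3 * (p1 - p2) + p3 - p0)))

def upscale_bicubic(img, factor):
    """img: 2D list of grayscale values; returns the image scaled by `factor`.
    No detail is invented: every output pixel is a weighted blend of the
    sixteen nearest input pixels."""
    h, w = len(img), len(img[0])

    def px(y, x):
        # Clamp coordinates so border samples reuse edge pixels.
        return img[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

    out = []
    for oy in range(h * factor):
        sy = oy / factor
        y0, ty = int(sy), sy % 1.0
        row = []
        for ox in range(w * factor):
            sx = ox / factor
            x0, tx = int(sx), sx % 1.0
            # Interpolate four neighboring rows horizontally, then vertically.
            cols = [cubic(px(y0 + j, x0 - 1), px(y0 + j, x0),
                          px(y0 + j, x0 + 1), px(y0 + j, x0 + 2), tx)
                    for j in range(-1, 3)]
            row.append(cubic(cols[0], cols[1], cols[2], cols[3], ty))
        out.append(row)
    return out
```

Because the output is only a blend of existing samples, upscaling a flat region yields an equally flat (just bigger) region, which is exactly the limitation described above.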

This is the reason that video remastering today is largely a manual process and rarely performed, reserved primarily for high-value content. It would be great if we could easily and effectively up-res and remaster both old and new video content; it just hasn’t been technically possible. Until now, that is…

Deep Learning is the latest and most successful wave of Artificial Intelligence (AI). As the name suggests, Artificial Neural Networks are inspired by mathematical models of the brain and visual cortex. They are composed of layers of neurons which transmit signals based on their connections. When shown enough examples, these networks act as universal function approximators, able to learn complex mappings from input to output.
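To make "layers of neurons transmitting signals" concrete, here is a toy fully connected layer in plain Python. The weight values are arbitrary illustrations; in a real network they are learned from training examples:

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer: each neuron emits a squashed (tanh)
    weighted sum of every input it is connected to."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network: 3 inputs -> 2 hidden neurons -> 1 output.
hidden = dense_layer([0.5, -1.0, 2.0],
                     weights=[[0.1, 0.4, -0.2], [-0.3, 0.8, 0.5]],
                     biases=[0.0, 0.1])
output = dense_layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
```

Stack enough of these layers and train the weights on enough examples, and the composite function can approximate very complex input-to-output mappings, such as low-res frame to plausible high-res frame.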

This new Deep Learning way of doing things can achieve something that was science fiction just a few years ago: Neural Networks can actually form their own imagination and hallucinate plausible new details where they didn’t exist in the original image, and the difference is pretty striking.

The results are impressive, and thanks to AI this capability is now available to the wider market, not just large movie producers. The price point is well within any production budget for e-commerce, advertising or other short-form pieces. Meanwhile, because the work has been automated in software and does not depend on human labor, turnaround times have collapsed, making this far more practical.

To learn more about how Artomatix has pioneered the use of Deep Learning to remaster old content or up-res more modern footage, please contact us at info@artomatix.com.

Interview with Gaming Bolt
Sun, 10 Dec 2017

In February of 2017, Gaming Bolt published excerpts of an interview with Artomatix Founder & CTO Dr. Eric Risser. You can read that here.

However, there was so much good material they didn’t have room for that we thought we’d publish it here on the blog.

Q: Artomatix sounds like a dream come true for video game developers. Can you explain what it is and how it works?

A: AAA games studios spend a lot of time (and money) building big virtual worlds. Some parts of building these worlds are fun and highly creative, but other parts can become really tedious and time-consuming, such as removing seams in materials. Artomatix is a solution which automates these tedious and not very creative tasks. We accomplish this using neural networks, statistics and a whole lot of passion for the video game industry!

Examples of use cases we cover today:

Removing seams on organic or structured PBR materials;

Texturing big assets or open 3D worlds without obvious repeats in Unity or Unreal through a dedicated offering called Infinity Tiles.

Two very exciting features Artomatix is working on are style transfer on 3D assets and 3D hybridisation. Style transfer applied to 3D assets will enable studios to recycle a lot of their old assets or fix the style mismatches they get from outsourcers, and will let 3D artists around the world use assets from repositories more frequently. As for 3D hybridisation, it allows people to generate infinite variations of untextured and textured meshes, which we believe will play a key role in AI-powered creation of 3D worlds… which we believe is the future.

Our current offering is cloud-based and runs in a web browser, due to the computationally heavy nature of our tech. We’re working on an installable version of Artomatix, which will run on people’s computers and will need only minimal access to the Internet to handle the heavy processing. This version will handle batch processing of assets as well as many more use cases that we couldn’t get working in a browser due to its limitations.

Q: What differentiates the AI-driven building process of Artomatix from something like procedural generation?

A: If I were to put it in one word: automation. Procedural generation is the process of making art by writing code. Once you write the code to make a specific art asset, you can make a lot of those assets with various randomized characteristics (which you program into the procedure), but there’s no “free lunch” in this way of doing things. You, the artist, are still doing a lot of work and spending a lot of time authoring these procedures. In contrast, our AI-driven process actually does the work for you.

That’s the difference from a user’s perspective. From an engineer’s perspective it’s a completely different and unrelated thing. Under the hood, our approach makes heavy use of statistics-based machine learning fused with neural networks that drive the creative process. Essentially, we train a neural network on millions of images until it starts recognizing and understanding the atomic components that make up images and how those components fit together. This is essentially the same learning phase that infants go through as they learn to parse the world. From there we can use the trained neural network as a way of actually making images. The explanation of how this works is a bit technical for this interview, but at a high level the neural network is shown a texture or material and then imagines new ones that share the same characteristics. On top of that, the entire process is guided and controlled by statistical machine learning, which directs it toward desirable, user-defined characteristics.
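Artomatix's exact pipeline isn't public, but one common way neural texture synthesis summarizes "the same characteristics" is with Gram matrices of feature-map activations (the approach popularized by Gatys-style texture synthesis). A hedged, pure-Python sketch, where `features` stands in for the per-channel activations a trained network would produce:

```python
def gram_matrix(features):
    """features: one flat list of activations per channel.
    Entry (i, j) measures how strongly channels i and j co-activate
    across the image; matching these statistics between an example
    texture and a synthesized one is what makes the output share the
    example's look without copying it pixel for pixel."""
    n = len(features)
    size = len(features[0])
    return [[sum(a * b for a, b in zip(features[i], features[j])) / size
             for j in range(n)] for i in range(n)]

# Two toy channels of two activations each:
example = gram_matrix([[1.0, 2.0], [3.0, 4.0]])
```

Synthesis then becomes an optimization: adjust the output image until its Gram matrices match the example's, which is one concrete reading of "statistically similar" in the answer above.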

Q: How does the Infinity Tile technology help to provide freshness to the art? What are the minimum required art assets that Artomatix needs to really be viable?

A: An Infinity Tile is essentially a single ‘smart’ texture that internally consists of 16 sub-textures that all fit together, interchangeably. An algorithm is then applied to the tile (which comes packaged in our Unity and Unreal plugins) which will shuffle the UV space of the Infinity Tile, causing the 16 sub-tiles to be constantly re-arranged, thereby creating an infinite non-repeating pattern. This is a fusion of our AI driven art creation, which generates the 16 tiles, and a procedural approach which re-arranges them automatically.
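The shuffling idea above can be sketched as a shader-style UV remap. This is a hedged illustration, not Artomatix's actual plugin code; it assumes the 16 sub-tiles are packed in a 4x4 atlas, that all borders are interchangeable (as described), and non-negative UVs:

```python
def subtile_index(tile_x, tile_y, n=16):
    """Deterministic 2D hash: every world-space tile picks one of the
    n interchangeable sub-tiles, so the choice is stable frame to frame
    but never forms a visible repeating pattern."""
    h = (tile_x * 73856093) ^ (tile_y * 19349663)
    return (h & 0x7FFFFFFF) % n

def shuffled_uv(u, v):
    """Map a world-space UV into the 4x4 sub-tile atlas (u, v >= 0)."""
    tx, ty = int(u), int(v)
    idx = subtile_index(tx, ty)
    atlas_u = (idx % 4) / 4.0   # sub-tile origin within the atlas
    atlas_v = (idx // 4) / 4.0
    return atlas_u + (u - tx) / 4.0, atlas_v + (v - ty) / 4.0
```

Because the hash depends only on the tile coordinates, the same terrain location always shows the same sub-tile, while neighboring tiles draw different (but border-compatible) ones.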

Infinity Tiles allow Environment Artists to add texture to huge stretches of terrain without worrying about weird patterns or artefacts emerging. Considering Environment Artists have already developed work-around solutions for this problem, Infinity Tiles can be used to enhance these solutions, effectively eliminating tiling artefacts for good!

Considering Infinity Tiles simply re-arrange a texture’s features in a smart way, you get out what you put in! Of course, you can use the rest of the Artomatix suite to improve the quality of your textures before using them as Infinity Tile input.

Q: Which game engines currently support Artomatix and are you working with other developers to support their technology?

A: You can jump in and start using Artomatix Infinity Tiles with either Unity or Unreal. We’re in discussions with other engines, but everything’s still under NDA. We are working with AAA developers directly as well as pioneers of VR outside of games. Unfortunately our lips are sealed with NDAs.

Q: Can you tell us more about the technology’s Seam Removal and how it intelligently parses PBR textures together in a natural way?

A: Sure, the best way to talk about our Seam Removal technology is to frame it against the current approach that people generally use. If you’re an artist and need to remove a seam, you would use the clone stamp tool in Photoshop to copy and paste patches from the middle of the texture into the border regions, blending the borders of a patch in with the background texture. Procedural methods (known as “texture bombing”) are essentially a mechanized version of this process where patches of texture are randomly “bombed” all over the seams and blended in. This can sometimes work for highly stochastic textures, e.g. grass, asphalt… This approach fails on anything with structure, that’s where Artomatix saves the day.

Texture bombing isn’t aware of the content of the texture it’s working on, so it just does the same operation regardless of the input. In contrast, our method is “smart”: it looks at the structure of the texture it’s going to synthesize, learns the patterns and adapts itself to the structures it finds. During synthesis it doesn’t just copy patches; rather, it re-builds the texture at the pixel level so that the overall look and structures are statistically similar to the input. Our approach doesn’t see pixels the way humans do, as colors, normals and displacements… rather, our technology sees “super pixels” containing all of this information at once, so when it builds new materials it takes all of that information into account. That’s the key to synthesizing new PBR textures in a natural way.
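To make the conventional texture-bombing baseline described above concrete, here is a rough numpy sketch (patch size, bomb count and feathering are all hypothetical parameters, not any production tool’s defaults): paste random interior patches over the border regions and feather them in.

```python
import numpy as np

def bomb_seams(tex, patch=16, n_bombs=40, seed=0):
    """Naive texture bombing: splat feathered interior patches over the
    left/right borders so the tile edges blend when the texture repeats.

    Works tolerably on stochastic content (grass, asphalt) but, as noted
    above, destroys any regular structure such as brickwork.
    """
    rng = np.random.default_rng(seed)
    h, w = tex.shape[:2]
    out = tex.astype(np.float64).copy()
    # Feathering mask: 1 at the patch centre, fading to 0 at its edge.
    yy, xx = np.mgrid[0:patch, 0:patch]
    d = np.minimum.reduce([yy, xx, patch - 1 - yy, patch - 1 - xx])
    mask = np.clip(d / (patch / 4), 0, 1)[..., None]
    for _ in range(n_bombs):
        # Source patch from the interior, target centred on a border.
        sy = rng.integers(patch, h - 2 * patch)
        sx = rng.integers(patch, w - 2 * patch)
        ty = rng.integers(0, h - patch)
        tx = rng.choice([rng.integers(0, patch),
                         rng.integers(w - 2 * patch, w - patch)])
        src = tex[sy:sy + patch, sx:sx + patch].astype(np.float64)
        out[ty:ty + patch, tx:tx + patch] = (
            mask * src + (1 - mask) * out[ty:ty + patch, tx:tx + patch]
        )
    return out

tex = np.random.default_rng(1).random((64, 64, 3))
blended = bomb_seams(tex)
print(blended.shape)  # (64, 64, 3)
```

Note the sketch has no notion of what the patches contain, which is exactly the blindness the answer above contrasts with a structure-aware, pixel-level synthesis.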

Q: To what extent can PBR textures be mutated? Can you tell us of some instances where new textures were created as a result?

A: With our Texture Mutation, textures can be organically “grown” up to 8k in size. In fact, this limit is entirely imposed by hardware limitations, so you may see even larger outputs in the future.

In any case, a picture is worth a thousand words, so here’s an example of our mutate feature in action. We created a new instance of this cobblestone texture, 64 times the coverage of the original as well as self-tiling as a bonus.

Q: What is the current outreach of Artomatix to smaller developers?

A: Ongoing and always increasing. I would even go as far as to say that most of our growth, understanding and development as a company stems from our relationships with smaller developers rather than the bigger studios we’ve dealt with. Smaller developers are all unique in their own way; each with a different dream of what they want to create. Speaking with them and understanding what we can do to help them best achieve their goals has always been a driving force behind our own growth and technological advancements, so I guess that you could say our outreach to them is pretty extensive.

Q: How much reduction in production overhead has Artomatix had for some developers? What kind of potential does it hold for future triple-A development?

A: The need for 3D content is about to experience a step change. The 3D market will almost triple in the next six years, which, coupled with decreasing per-asset prices, means people will need a LOT of assets. Tie this to increasing labor costs in countries that have relied on a cheap workforce to offer outsourcing services and you have the recipe for a big headache!

It is our firm belief that AI will play a key role in addressing this booming need for content. Need to texture an endless 3D world? Use our Infinity Tiles. Need to adapt your assets to a certain style? Use our 3D Style Transfer (in development). Need to create endless variations of certain assets? Use our 3D Hybridisation (in development). Implemented right, these features are bound to have a lasting impact on the industry, as they lower the cost of 3D asset creation by an order of magnitude.

So to come back to the question: it’s not so much about reducing the production overhead as it is about addressing the strong need for content.

As for the potential of Artomatix and AI applied to 3D content creation in the industry: we believe it’s enormous. With the right ingredients and a lot of effort, we believe the industry will be able to create highly detailed, immersive 3D worlds in a fraction of the time it takes today.

Q: What kind of performance challenges have you had to overcome for Artomatix’s smooth operation? Have there been any specific problems that developers have faced?

A: When we started, our core algorithms were implemented using a normal computer processor. To generate a single texture took roughly half an hour. We have since ported our code to run on GPUs, and we can now perform this same task in mere seconds.

The reason for this is that our algorithms involve performing millions of small calculations, across all the pixels in the image. When we run them all in parallel on the GPU, this is much faster.
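The speedup comes from expressing the per-pixel work as a single data-parallel operation. As a loose CPU-side analogue (using a hypothetical gamma-adjustment as the per-pixel operation, not Artomatix’s actual algorithm), numpy shows the loop-versus-array contrast that a GPU port exploits at much larger scale:

```python
import numpy as np

def per_pixel_loop(img):
    """Per-pixel gamma adjustment, one pixel at a time (the slow way)."""
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = img[y, x] ** 2.2
    return out

def per_pixel_parallel(img):
    """The same operation as one array expression. On a GPU each pixel
    would get its own thread; numpy at least fuses the loop into
    optimized native code."""
    return img ** 2.2

img = np.random.default_rng(0).random((32, 32))
print(np.allclose(per_pixel_loop(img), per_pixel_parallel(img)))  # True
```

The results are identical; only the execution strategy changes, which is why the port described above changes runtimes without changing outputs.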

Q: What are your plans for the future with Artomatix?

A: What wakes us up every morning is the desire to have a positive, lasting impact for as many people as possible with Artomatix’s technology. We believe we’ll reach this goal if we bring to market a radically new way of creating an immersive experience, one that hinges on curation rather than tedious, floor to ceiling creation.

To reach this big, ‘hairy’ goal we certainly have a few steps along the way, such as:

Release an application version of Artomatix. Our v1 runs in a web browser using WebGL. Our goal was to get this technology out into the world with as few barriers to entry as possible. Unfortunately we had to make a lot of usability compromises when working in a browser. Moving forward, we’ve been getting strong demand for a more robust system that’s better integrated into standard production pipelines. We’re currently building a v2 of Artomatix that you can install on your machine. It will still be cloud-based, as the process needs really powerful GPUs at certain points, but overall the system should be more stable, easier to use and a lot faster!

Asset recycling. This is a bit of a paradigm shift. Currently, when you’re making a game you have to make all your assets from scratch. A lot of games will have a unique art style, which means you can’t recycle old assets from previous games; they just won’t fit the style. Alternatively, if you’re making a realistic-looking game, you still generally can’t re-use old assets because they’re low resolution or follow outdated standards. We’re working on two new technologies which are going to change all this, “Style Transfer” and “Super Resolution”. In the future, you’ll be able to grab 90% of your assets off of Turbosquid or the Unity asset store and import them into your game, whereby the resolution and style are automatically updated to fit nicely in your virtual world.

Bring 3D Hybridisation to market. This is the ability to automatically create shapes based on examples. This is a technology that we’ve been developing for a while now which we believe is key to accomplishing our vision.

Q: With the PS4 Pro, Sony have increased the memory bandwidth a little bit, but they have kept the overall memory pool the same as it is on standard PS4 systems. Is this a fair trade off? Or do you foresee RAM becoming a bottleneck for game development as we move further on with this generation?

Q: The PS4 Pro has double the GPU power of the original. What kind of advantages has this brought in for developers?

Q: With the PS4 Pro, we now know the machine’s specs. What do you foresee being the biggest bottleneck to game development on the improved console? Would it be the CPU, which was always hamstrung even on the original PS4, but is even more so now, relative to the rest of the machine?

Q: 6 TFLOPs naturally means that the Xbox One Scorpio has an extremely powerful GPU. Assuming that the rest of the specs also see similar or comparable bumps, what are the kinds of graphical improvements developers will be able to deliver on Scorpio?

Q: The Xbox One Scorpio is being touted as the most powerful console ever made. And yet, given Microsoft’s diktat that all games must maintain parity with standard Xbox One systems, and that there can be no Scorpio exclusives, do you really think that the Scorpio’s power can be put to any meaningful use?

Q: In a recent interview, Mark Cerny, the lead engineer of the PlayStation 4 Pro, claimed that converting a base PS4 game to a PS4 Pro version is just 0.2 or 0.3% of the overall effort. What is your take on this? Do you think that the extra work required to develop an additional Pro version is actually bigger than the number quoted?

A: I believe these six questions all overlap in terms of answers, so instead of writing six terse yet redundant answers, I figured I’d pool all this into one big topic and try to give an explanation of what I think is happening here.

I believe that three recent technology factors or market shifts have led to these “Pro and Scorpio” upgrades. These are:

(1) The introduction of VR as a consumer technology.

(2) The widespread uptake of 4k TVs.

(3) The recent launch of a new generation of GPUs based on a much better/faster/cheaper underlying technology. This means that console providers can double GPU performance without increasing costs.

When gaming on a monitor, your console only needs to render one picture at a time. When gaming in VR though, your console needs to render two pictures at the same time, one for each eye. This is essentially the big computational cost of switching to VR, you need double the GPU. The only difference between the PS4 and PS4-Pro is a GPU with double the cores. Personally, I don’t think of the Pro as an upgrade, I think of it as the “Virtual Reality edition” of the PS4. While the Scorpio specs aren’t released yet, I suspect they’ll follow the same trend as the PS4… they’ll at least double the compute cores so you can play Xbox in VR.

So what does this extra processing mean for gamers that aren’t interested in VR? It could mean one of two things, you either keep the quality of each pixel the same and increase the number of pixels (i.e. 4k), or you keep the number of pixels the same and increase the quality of each pixel.

4k essentially has the same problem as VR: you need to render a lot more pixels. VR is a 2x increase, while 4k is 4x the number of pixels a console was designed to render. The PS4 Pro can’t do true 4k rendering since Sony only doubled the number of cores in the GPU. Instead it does a “checkerboard” upscaling approach to hallucinate the detail, which is probably a topic of discussion all on its own.
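The 2x and 4x figures fall straight out of the pixel counts:

```python
# Pixels per rendered frame at each target resolution.
p_1080 = 1920 * 1080        # 2,073,600 pixels at 1080p
p_4k = 3840 * 2160          # 8,294,400 pixels at 4k
print(p_4k // p_1080)       # 4 -> native 4k is 4x the pixels of 1080p

# Stereo VR renders one full image per eye: roughly 2x the work.
print(2 * p_1080)           # 4,147,200 pixels per stereo frame pair
```

So a doubled GPU covers the VR case comfortably, but falls well short of brute-forcing native 4k, which is exactly why a reconstruction trick like checkerboarding is needed.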

I think the interesting scenario is where players choose to use their “Pro” consoles on a 1080p screen, because now game developers can actually improve the look of their games. This is common in PC gaming where the player can choose their own “render settings” to choose how good the game looks.

Unfortunately, it’s too early to tell what the demand will be for better-looking console games and whether developers will bother to offer improved graphics. I think Mark Cerny’s quote that porting from PS4 to PS4 Pro will be a trivial effort is probably accurate. The only component that’s changed is the GPU, and they’ve essentially just increased the number of cores rather than changing the underlying architecture. Assuming developers build their console game on one of the leading commercial game engines (e.g. Unity, Unreal), they’ll have a lot of built-in, off-the-shelf options for improving the look of the game without having to do any additional work themselves. For example, anti-aliasing, motion blur, bloom effects and parallax occlusion mapping are all graphical improvements that can be turned on or off just by flipping a switch in the code. In the PC world we give players the option to choose these settings themselves based on what GPU they have and what framerate they’re comfortable with. In consoles, I think it will be on the developers to choose between two presets based on which console they detect.

Typically, when we see a boost to graphics it’s usually part of a new console generation which also sees an update on CPU and RAM. In general, a stronger GPU means you can display more stuff on screen, which means you need more memory to store that virtual world and a better CPU to handle all the A.I. and physics for this now more complex environment. I think one instinctual reaction is to see the GPU power double while all other specs stay the same and wonder if the rest of the console will be able to keep up. It’s important to remember that this GPU boost is just a reaction to VR headsets and 4k TVs and it isn’t intended to give developers the option to build bigger or more complex games. Once games are made that require stronger hardware (e.g. faster CPU, more RAM), they will no longer work on the original console, which would effectively make this a new generation instead of a new edition.

I realize there’s a lot of marketing hype surrounding these new upgraded versions of each console, but I think the improvements are very incremental and mostly just mid-term updates focused on supporting VR devices. If we look back far enough, we’ve actually seen this sort of thing happen before. In the early 90’s compact discs became mainstream, and new consoles like the 3DO were released around this technology, offering video cut-scenes and high-quality music in games. In order to stay competitive Sega released the Sega CD add-on for their Genesis console. Nintendo was planning on the same and partnered with Sony to develop a CD add-on for the SNES. Ironically, after a good deal of drama, Nintendo dropped this project and Sony decided to keep working on what would soon become the first PlayStation.

The reality is that everyone was caught off-guard by the Oculus Rift Kickstarter campaign kicking off the rise of VR. Now Microsoft and Sony are in an awkward position. If they were to support VR with the original console they brought to market, players would see a big drop in graphics and probably migrate over to PC or, worse, allow a gap in the market for a new direct competitor to emerge. In a perfect world VR would coincide with the end of a console cycle and they could just design the next generation for VR from the ground up. Unfortunately, at a mere 3 years old, neither console is ready for retirement yet. I think offering a mid-term “VR edition” is a sensible compromise. Of course, there’s going to be a lot of marketing hype over TFLOPs and introducing the “most powerful console ever”, because that’s what marketing people do. Personally, I’m not taking the hype too seriously.

Q: What is your take on Sony’s Checkerboard technique for 4K rendering versus the native 4K rendering that Microsoft are espousing with the Scorpio? To the naked eye, what will the difference be? And what are the differences from a development and programming perspective?

A: The Checkerboard technique shouldn’t be too much trouble to program as it’s just a simple post-process. They essentially render half the pixels in a checkerboard pattern and then fill in the blank pixels by blurring together the rendered pixels. People have been doing tricks like this for decades. As for how it will stack up against true 4k rendering, I honestly can’t say without looking at a few games being played side by side. Obviously the quality won’t be as good; the question is whether it will be noticeably worse.
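The basic idea described here can be sketched in a few lines of numpy. To be clear, the real PS4 Pro pipeline is more sophisticated (it also reuses motion vectors and data from previous frames); this sketch shows only the core fill-in-the-blanks interpolation:

```python
import numpy as np

def checkerboard_fill(full_render):
    """Keep the 'shaded' checkerboard half of a frame and reconstruct
    each missing pixel by averaging its four rendered neighbours.

    Only a sketch of the core interpolation step; real checkerboard
    rendering also exploits temporal data from previous frames.
    """
    h, w = full_render.shape
    y, x = np.mgrid[0:h, 0:w]
    rendered = (y + x) % 2 == 0                 # the half we actually shaded
    sparse = np.where(rendered, full_render, 0.0)
    # 'reflect' padding keeps the border averages on rendered pixels.
    padded = np.pad(sparse, 1, mode="reflect")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
    return np.where(rendered, full_render, neigh)

# A flat image reconstructs perfectly; detailed content loses sharpness.
flat = np.full((8, 8), 0.5)
print(np.allclose(checkerboard_fill(flat), flat))  # True
```

This also illustrates why the technique holds up on smooth content but softens fine detail: anything finer than the checkerboard spacing gets averaged away.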

As an expert on the topic of “upscaling”, or hallucinating enhanced details from low-resolution images, I’m a little disappointed that Sony would go with something simple and outdated like the Checkerboard approach. I wish they’d worked with a company like Artomatix which has expertise on this topic. We’ve been developing upscaling technology utilizing neural networks which is years ahead of the traditional signal processing methods. Here’s an example of a 4x up-res (so 16x the number of pixels) we performed on Link’s shield.

Although the nerd in me is a little disappointed, I can’t argue with Sony’s logic for going with a signal processing strategy: it’s reliable and safe, and it’s what people have been doing for years. That said, I think in the long run better quality and performance can be gained by using neural network upscaling.

Here at Artomatix we’ve developed our upscaling technology for a slightly different application. Our focus is upscaling old game assets so they can be re-used in new games, rather than being re-made from scratch. For games like Tomb Raider as an example, the developers have invested a lot of time and money building a library of textures: tree-bark, rock, grass, etc. which they mostly have to abandon when moving to a new console generation that can handle higher resolution textures. The idea with our upscaling technology is to recycle those old assets for the next generation. Also, classic games like Ocarina of Time or Shadow of the Colossus could be remastered with better game art, not just rendered at a higher resolution.

We’re also exploring our upscaling approach for old video footage. Imagine if the Simpsons from the early 90’s could be automatically re-synthesized to look like the latest season. I think the same idea could be used for consoles in the future. By using advanced real-time upscaling techniques, it should be possible to actually decrease the number of pixels that are currently being rendered, while getting higher resolution at a better visual quality. Our peers at Magic Pony Ltd. were working on a similar idea for utilizing neural network upscaling on streaming internet video, they were recently bought by Twitter for $150m.

Q: What are your thoughts on the Nintendo Switch? What unique challenges will a system like that would pose for game development?

A: I think it’s still a little early to form an opinion on the Switch. At the moment, I’m more surprised by it than anything else. Traditionally consoles resemble mid-range PCs when they launch. The Switch however looks more like a beefed up tablet… which has huge implications for developers.

When you make a game you generally want to support as many platforms as possible, because that gets you access to the most customers and thus the highest return. PS4, Xbox One and PC are all very similar machines under the hood, so if you’re making a game for one, you generally make your game for all three. With the Switch running on mobile hardware and probably the Android OS, I can see developers treating the Switch as a mobile device rather than a console. I can see a lot of Switch games probably getting released for Android devices rather than PC… assuming Nintendo doesn’t try to block that legally.

Q: Do you think Nintendo Switch being less powerful than PS4 and Xbox One will matter in the long run for the new console?

A: I don’t think it matters. Nintendo hasn’t been winning the graphics race since the N64. I think the Wii was significantly underpowered relative to the PS3 and Xbox 360 without gamers being too bothered by it. Nintendo finds other ways to offer an entertaining experience and I for one really appreciate their willingness to innovate.

The one thing I’ll say is that Nintendo does take a big risk when they try something radical and new, sometimes it pays off (e.g. the Wii) and sometimes it doesn’t (e.g. the Wii U). They’re making a big bet that gamers will want a console/tablet hybrid thing, while also risking their relationship with 3rd party developers by making a console that’s weird and thus harder to develop for. The Switch could be a great success or a huge disaster… I guess it all depends on how well they execute on the concept.

Artomatix and Texturing.xyz

Texturing.xyz has a reputation for excellence in providing ultra-high-quality, photo-realistic textures for creating digital humans. Their work features heavily in award-winning movies and games.

The goal of a modern scan-based workflow is to copy and paste objects from the real world into the digital one. Thanks to scanning, virtual worlds are achieving new and unprecedented levels of realism, while also keeping project budgets and timelines under control. The video game and movie industries are embracing this new method of content creation, with more and more titles leveraging scanned content.

Why Now?

Scanning has always existed, but historically there have been three significant barriers to widespread adoption.

1 | Availability of high-quality real-world scans

2 | Affordable tools for capturing raw scans

3 | Economic means of grooming raw scans into render-ready assets.

Built on neural networks that have been refined over many years, the Artomatix platform uses artificial intelligence to mimic the work of an artist and thereby automate the slow, repetitive and mundane tasks otherwise carried out by artists themselves. What would previously have taken hours to days to complete, in the case of a highly complex texture with multiple maps, can now be completed in a matter of seconds through Artomatix, making this new business paradigm possible for Texturing.xyz.

The vision for the future of 3D asset creation, shared by Texturing.xyz and Artomatix, combined with close collaboration in the production of these groomed assets, has enabled the launch of a truly unique offering. Until now, it has simply not been possible to offer such a wide variety of pre-processed textures of this quality at an affordable price point. The sheer man-day effort associated with processing each individual texture, and then producing multiple variants of the same texture for re-use, made this utterly impractical. In launching this new proposition, Texturing.xyz have clearly demonstrated their leadership position in this space.