
Interviews About Audio Integration

With Several Industry Veterans

Dan Cermak

Former General Manager of Deep Silver Volition

Previous projects include the Saints Row series


Background on Deep Silver Volition

1993: Founded as Parallax Software

1996: Divided into Outrage Entertainment and Volition, Inc.

2000: Acquisition by THQ

2013: Acquisition by Deep Silver

How Big Is Deep Silver's Audio Team?


Deep Silver's audio team averaged about three to five people at any given time, and it often got a little smaller when team leads were pulled away for meetings. Overall, the audio team was very small.

Does your team use middleware? Which one? And for which games?

Yes, Volition has actually used middleware on many of its recent games, including the Saints Row series and the Red Faction series. Volition specifically uses Wwise at the moment. I personally believe it is more of an industry standard than other software such as FMOD, but that's all developer preference.

How did your team create their sounds?

Volition had a lot more sound designers than Foley artists; they were more focused on just making sure the game sounded good. On top of this, we had assistance from both WB and outsourced sound studios when it came to creating audio.

Damjan Mravunac

Sound Designer/Composer for Croteam

Previous projects include The Talos Principle


Background on Croteam

1993: Founded in Croatia

2001: Released the first Serious Sam game

2014: Released The Talos Principle

How was your audio created? Do you create your sounds from scratch or do you record live sounds?

It’s a mixture of both. We do recording sessions in my studio that I’ve personalized to my particular needs. Some might call it a personal den, a man cave. And I won’t disagree. It’s a place I feel most comfortable in, and a place that helps my creative process. But, I digress.

 

Many sounds are created from scratch. Or at least have been at some point in time. I’ve been in this business for a long enough time that I have a huge catalog of sounds waiting to be used. We often do recordings with other team members as well. For example, the infamous kamikaze scream is an actual (albeit modified) scream by our CEO and founder Roman Ribaric mixed in with some library screams.

 

With that being said, we do mix and match where we see fit. I guess there are no rules. We make tons of sounds from scratch, live, and mix. That’s the best answer I can give.


How was your audio implemented for The Talos Principle and Serious Sam?

We don’t use middleware as we have our own internal engine development team, which maintains and develops Serious Engine, the tech behind Serious Sam and The Talos Principle. All the audio systems that we have in place have been programmed in-house.

 

Alongside all the standard sound features you’d find in other popular engines, we have some dynamic systems that monitor on-screen events as well as the player’s progress and change the music accordingly.
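
Croteam's in-house systems aren't public, so the snippet below is only a rough sketch of the idea Damjan describes: a small layer that watches what the game already knows (enemies on screen, a puzzle being solved) and switches music states when that changes. The class, states, track names, and thresholds are all hypothetical.

```cpp
// Rough sketch of an event-driven music layer; hypothetical names,
// not Croteam's actual Serious Engine code.
#include <cstdio>
#include <string>

enum class MusicState { Exploration, Combat, PuzzleSolved };

class MusicDirector {
public:
    // Called once per frame with facts the game already tracks.
    void Update(int enemiesOnScreen, bool puzzleJustSolved) {
        MusicState next = MusicState::Exploration;
        if (puzzleJustSolved)         next = MusicState::PuzzleSolved;
        else if (enemiesOnScreen > 0) next = MusicState::Combat;

        if (next != m_state) {
            m_state = next;
            CrossfadeTo(TrackFor(next));
        }
    }

private:
    static std::string TrackFor(MusicState s) {
        switch (s) {
            case MusicState::Combat:       return "combat_loop";
            case MusicState::PuzzleSolved: return "resolution_stinger";
            default:                       return "ambient_loop";
        }
    }

    void CrossfadeTo(const std::string& track) {
        // A real engine would schedule a beat-synced crossfade here.
        std::printf("music -> %s\n", track.c_str());
    }

    MusicState m_state = MusicState::Exploration;
};

int main() {
    MusicDirector music;
    music.Update(0, false); // still exploring: no change
    music.Update(3, false); // enemies appear: switch to the combat loop
    music.Update(0, true);  // puzzle solved: play the resolution cue
}
```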

 

The process is top-down oriented. I come to the programming lead with my ideas about the final outcome - how sound and music should work and play at certain parts of the game - and the lead then translates this into programming mumbo-jumbo and converts my desires into assignable tasks. After the features are programmed, we do some iterating and tweaking, and that’s basically it.

Have you ever had to use audio middleware on other projects?

As mentioned, we don’t use middleware. The thing is - you never know how long you’ll have the support for middleware, what will happen if the team behind it stops developing it, what happens when you encounter bugs, etc. And middleware comes with so many features that it’s easy to get lost and lose track, so it’s easier and faster for us to add stuff ourselves.

How large was your audio team?

The audio team is me, myself and I. I make sound effects, compose the score and just about everything else, including in-engine implementation. We did work with other audio folks in the past, but it was on a case-by-case basis. In this business, unless you are a really huge company, you don’t really have an audio team, but use outsourcing partners.

 

We’re lucky enough to have several projects in development at any given time so there’s a lot for me to do. But we wouldn’t benefit a lot from having a multiple-member team. So I do all the heavy lifting. It’s fitting, I guess. I’m a big guy.

Andrew Lackey

Independent Sound Designer

Audio Director for Ori and the Blind Forest


Background on Moon Studios

2010: Founded in Vienna by Thomas Mahler and Gennadiy Korol

2011: Signs distribution and development deal with Microsoft Game Studios

2015: Releases debut hit, Ori and the Blind Forest

Ori's sound design is famously well-crafted and polished. What was the audio process?


Ori was an interesting project because the bulk of the team didn't meet each other until launch day. The entire project was developed remotely in Unity. The audio team consisted of Gareth Coker, the composer, and three other sound designers. The game tech team helped a lot with implementation.

And what exactly did that implementation consist of?

Well, we didn't use middleware. Like I said, the entire project was developed in Unity, which has its own audio system, but a lesser one without a lot of parameter tuning. We needed more than that, especially for the level of complexity to which Gareth was going to integrate his compositions, which is why a lot of the audio groundwork for the game was custom built by both the game engineers and us, the sound team.
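
Moon Studios' custom audio groundwork isn't public either; as an illustration of the "parameter tuning" that stock engine audio lacks and that middleware normally provides, here is a minimal, engine-agnostic sketch of an RTPC-style curve mapping a gameplay value onto an audio value. The class and the wind-volume example are hypothetical.

```cpp
// Illustrative RTPC-style parameter curve; hypothetical, not Moon Studios' actual code.
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Piecewise-linear curve mapping a gameplay parameter to an audio value
// (volume, pitch, filter cutoff...), the kind of tuning middleware provides.
class ParameterCurve {
public:
    explicit ParameterCurve(std::vector<std::pair<float, float>> points)
        : m_points(std::move(points)) {}

    float Evaluate(float x) const {
        if (x <= m_points.front().first) return m_points.front().second;
        if (x >= m_points.back().first)  return m_points.back().second;
        for (std::size_t i = 1; i < m_points.size(); ++i) {
            if (x <= m_points[i].first) {
                const auto [x0, y0] = m_points[i - 1];
                const auto [x1, y1] = m_points[i];
                const float t = (x - x0) / (x1 - x0);
                return y0 + t * (y1 - y0);
            }
        }
        return m_points.back().second;
    }

private:
    std::vector<std::pair<float, float>> m_points;  // breakpoints sorted by x
};

int main() {
    // Hypothetical mapping: wind volume rises with the character's speed.
    ParameterCurve windVolume({{0.0f, 0.0f}, {5.0f, 0.4f}, {12.0f, 1.0f}});
    for (float speed : {0.0f, 3.0f, 8.0f, 15.0f})
        std::printf("speed %.1f -> wind volume %.2f\n", speed, windVolume.Evaluate(speed));
}
```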

Have you ever used middleware?

It really is developer preference. A lot of other projects that I've worked on have also wanted to build their systems up from scratch, like The Witness. Of the times I have used middleware, though, I find that I have used Wwise more than FMOD.

Jonathan Wachoru

Former Sound Designer for Red Barrels Inc.

Previous projects include Outlast


Background on Red Barrels Inc.

Red Barrels Inc. is a Canadian video game developer based in Montreal. The company was founded by Philippe Morin, David Chateauneuf and Hugo Dallaire in 2011.


How was your audio created?

So it really depends on the sound to create and on the importance of that sound. And by importance I mean: is it something the player will hear often? Is it important feedback? Is it an iconic sound for that project?
Depending on the answer, I will adapt my approach. I mean, yes, I prefer using my own stuff, going outside (or not) and recording objects and textures to reuse later. But during the development of Outlast, I wasn't in that kind of mindset. I was younger than I am today, with less experience and less gear (microphones, recorders, props), so I often went for the faster the better. I know it's kind of a shame, but I was more focused on the interactive systems, and on how to implement all of them, than on which sounds would be best.
I think it really depends on the production process and the time you're given to do your job.
But right now I am more experienced, with a few games behind me, and I've built a kind of processing pipeline for when I need to work on some sounds. I ask myself the previous questions, then depending on the answers I start a kind of pipeline. If I have time, I will try to experiment with new material to record. If I have less time, I will try to use stuff I've already recorded. And if time is tighter still, I will check commercial libraries for material to start working with. After that, I start creating sounds with my "ingredients". One thing to know: there is no shame in using commercial libraries. ;)

How was your audio implemented for Outlast?

Tough question but interesting indeed! It's been a while, but I know I built my expertise because of that project.
Outlast has probably been the hardest project I've had to work on. Not because of the nature of the project itself, but because I was young, with a lot still to learn and no mentor to show me the right path. So I made a lot of mistakes. A LOT! And essentially on the implementation side. The sound of Outlast has often been well received by press and players, but in terms of integration there's a lot of crap inside.
In terms of optimisation, first: I wasn't really experienced in this topic, so if I heard a sound in the game I was happy, and didn't ask myself whether there were other ways to make my systems simpler and better optimized. Working on several projects after that showed me ways to simplify my ideas and make them less CPU-hungry.
The dynamic system I am most proud of is what we called "Stress Breath". It sounds like a gum brand name! Haha! And yes, I remember that when I pitched this idea of giving the main character a dynamic breath, the Red Barrels team wasn't very hyped. They thought it would break the immersion of the game. My argument was the opposite: that it would create a connection with the character, making him more human and vulnerable. And (second argument) you could use the breath as a feedback feature, helping the player determine how close an enemy is (because ultimately, the breath gets faster depending on the proximity of an enemy). After some tests in game, they all loved the result. And now you hear stress breathing in all horror games! The concept of the breath system was pretty simple. In the game you have three main states: one when you're exploring, one when you're being chased, and one when you're hiding. So we had to record various breath cycles with increasing intensity (three intensities/speeds) for each game state. For example: you are detected by an enemy and you start running; your "running breath" will be different depending on whether you have been detected or not. Same thing if you are under a bed. You can hide under a bed without being detected. But if you are detected, your breath will adapt to the proximity of the enemy, as if the character were trying not to be heard, and when the enemy goes away you can hear your character release his breath... It is subtle, but it worked very well, and it pushed the fear right to the top.
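
The description above is concrete enough to sketch the selection logic: three player states, three recorded intensities per state, and proximity to the nearest enemy deciding which intensity plays. The snippet below is a simplified reconstruction under those assumptions, not Red Barrels' actual code; the state names, distance thresholds, and clip names are hypothetical.

```cpp
// Simplified reconstruction of the "Stress Breath" selection logic described above.
// Hypothetical names and thresholds; not Red Barrels' actual implementation.
#include <cstdio>
#include <string>

enum class PlayerState { Exploring, Chased, Hiding };

// Pick one of the three recorded intensities from how close the nearest enemy is.
static int IntensityFor(float enemyDistance) {
    if (enemyDistance < 5.0f)  return 3; // enemy is right on top of you
    if (enemyDistance < 15.0f) return 2;
    return 1;                            // relatively calm
}

// Choose which recorded breath cycle should loop this frame.
static std::string BreathClip(PlayerState state, float enemyDistance) {
    const char* stateName =
        state == PlayerState::Chased ? "chased" :
        state == PlayerState::Hiding ? "hiding" : "exploring";
    return std::string("breath_") + stateName + "_intensity" +
           std::to_string(IntensityFor(enemyDistance));
}

int main() {
    // Hiding under the bed: the enemy walks closer, then leaves the room.
    std::printf("%s\n", BreathClip(PlayerState::Hiding, 20.0f).c_str());
    std::printf("%s\n", BreathClip(PlayerState::Hiding, 3.0f).c_str());
    std::printf("%s\n", BreathClip(PlayerState::Hiding, 30.0f).c_str());
    // When the enemy leaves, a one-shot "release the breath" layer would be
    // played on top of the drop back to the low-intensity loop.
}
```

In the shipped game the transitions would of course be crossfaded and randomised rather than hard-switched, but the state-plus-proximity lookup is the core of the system described.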

Joe Cavers

Former Sound Designer for Rockstar Games

Previous projects include Red Dead Redemption 2


Background on Rockstar Games

1998/1999: Established/formed as a subsidiary of Take-Two Interactive

2013: Releases Grand Theft Auto V, the 3rd best-selling video game of all time

How was your audio created?

For a large majority of my work (and for more than I'd like!), I will use preexisting libraries to source sounds, then edit and process from there. The more "design-y" the sound, the more I'll have to edit and process. If I have time or can't find the specific sound I need in my library, I'll record my own sounds and use those.


How was your audio implemented for RDR2?

This is indeed a huge question! At Rockstar there is no middleware; everything is proprietary. The workflow isn't very different from other studios using middleware, though. I think it is fair to say that nowadays there is almost always some form of interface for a sound designer to work with. Whether this is a separate bit of software like Wwise, or something that's integrated into the engine (Fabric in Unity, for example), the sound designer will usually have a front end through which they can implement sounds.
 
Broadly speaking, the workflow is as follows:
    - Make your sounds in DAW of choice
    - Add those sounds to the engine/middleware/tool of choice. For example, in Wwise, this can be done by dragging your .wav files into the project hierarchy.
    - Use tools/frontend interface to create appropriate playback behaviours and relevant files for engine to communicate with audio tool. The details for this part will vary hugely depending on the tool and project.
    - Hook up the sounds in the engine, outside of the audio tool (a minimal sketch follows this list). Again, this varies so hugely project to project, even for projects using the same tools!
    - Test your sound!
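
To make steps 3 and 4 concrete for the Wwise path mentioned above, here is a minimal engine-side sketch using the Wwise SDK. It assumes the sound engine has already been initialized at startup, and the game object ID, event name, and RTPC name ("Play_Footstep", "Surface_Wetness") are hypothetical placeholders for whatever the sound designer authored in the Wwise project.

```cpp
// Engine-side hookup for steps 3-4, using Wwise as the example tool.
// Assumes the Wwise sound engine was initialized at startup; names below
// are hypothetical and must match the authored Wwise project.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

static const AkGameObjectID kPlayerObject = 100;

void InitPlayerAudio() {
    // Each emitter in the world is registered as a Wwise game object.
    AK::SoundEngine::RegisterGameObj(kPlayerObject, "Player");
}

void OnFootstep(float surfaceWetness) {
    // Drive a game parameter (RTPC) authored in the Wwise project,
    // then post the event the sound designer created for this action.
    AK::SoundEngine::SetRTPCValue("Surface_Wetness", surfaceWetness, kPlayerObject);
    AK::SoundEngine::PostEvent("Play_Footstep", kPlayerObject);
}
```

The designer never edits this code; they iterate on the event's behaviour (randomisation, attenuation, the RTPC curve) inside the Wwise authoring tool, which is roughly the division of labour the list above describes.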

Have you ever had to use middleware on other projects?

I have previously used Wwise a lot (my second job in the industry used Wwise, as do a couple of my current projects). I've used FMOD a tiny amount in the past, but not for a long time now.
 
Middleware is very much used in the industry. It usually stems from developer choice (at least, in my experience). The audio tools that come with game engines tend to be basic/limited and if you want to do anything more complex, you either need to buy a tool that does this i.e. license middleware, or write the code yourself.
 
Audio programming is an enormously deep and complex field, and unless your team is investing in its tools pipeline for the long haul (like Rockstar has with their proprietary tools), it's unlikely they'll have the resources, or want to dedicate/hire resources, for this task.
 
Middleware provides a very large, evolved toolset that allows you to do a lot from the moment you have it. This doesn't mean it can do everything but overall, it will save a lot of time when compared to the alternative.

How large was your audio team at Rockstar?

When I was there, the team was roughly around 30, spread across many different studios. We didn't have a composer who was a Rockstar employee but I believe that the composers on that project worked regularly and closely with the music supervisors. Rockstar's situation is extremely unusual (as are their sales figures!).