Inter-process Communication: How?
A post of mostly questions, and no answers!
So I needed to do some IPC (Inter-process Communication) lately for shader compilers. There are several reasons why you’d want to move some piece of code into another process; in my case they were:
Bit-ness of the process: I want a 64-bit main executable, but some of our platforms only have 32-bit shader compiler libraries.
Parallelism: for example, you can call NVIDIA’s Cg from multiple threads, but it will just lock some internal mutex for most of the shader compilation time.
On having an ambitious vision
We just announced upcoming 2D tools for Unity 4.3, and one of the responses I’ve seen is “I am rapidly running out of reasons not to use Unity”. Which reminds me of some stories from a few years back.
Perhaps sometime in 2006, I was eating shawarmas with Joachim and discussing possible futures of Unity. His answer to my question, “so what’s your ultimate goal with Unity”, was along the lines of,
Some Unity codebase stats
I was doing a fresh codebase checkout & build on a new machine, so I got some stats along the way. No big insights, move on!
Codebase size
We use Mercurial for source control right now, with the “largefiles” extension for some big binary files (mostly precompiled 3rd party libraries).
Getting only the “trunk” branch (without any other branches that aren’t in trunk yet), which is 97529 commits:
Size of whole Mercurial history (.
Rough sorting by depth
TL;DR: use some highest bits from a float in your integer sorting key.
In graphics, you often want to sort objects back-to-front (for transparency) or front-to-back (for occlusion efficiency). You also want to sort by a bunch of other data (global layer, material, etc.). Christer Ericson has a good post on exactly that.
There’s a question in the comments:
“I have all the depth values in floats, and I want to use those values in the key.”
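To illustrate the idea from the TL;DR (this is my own sketch, not code from the post, and the 64-bit key layout is entirely made up): for non-negative IEEE 754 floats the raw bit pattern sorts in the same order as the float value, so the top bits can go straight into an integer key.

    // Sketch only: pack the highest bits of a non-negative float depth into an
    // integer sort key. For non-negative IEEE 754 floats, the raw bit pattern
    // compares in the same order as the float value itself.
    #include <cstdint>
    #include <cstring>

    inline uint32_t DepthToBits(float depth, int bitCount)
    {
        uint32_t bits;
        std::memcpy(&bits, &depth, sizeof(bits)); // safe type-pun
        return bits >> (32 - bitCount);           // keep only the highest bits
    }

    // Hypothetical 64-bit key layout: [ layer:8 | material:24 | depth:16 | unused:16 ]
    inline uint64_t MakeSortKey(uint32_t layer, uint32_t material, float depth)
    {
        return ((uint64_t)(layer    & 0xFF)      << 56) |
               ((uint64_t)(material & 0xFFFFFF)  << 32) |
               ((uint64_t)DepthToBits(depth, 16) << 16);
    }

Sorting keys in ascending order then gives front-to-back; for back-to-front you would invert the depth bits.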
Speaking at i3D and GDC 2014
I’ll be speaking at i3D Symposium and GDC in San Francisco in a couple of days.
At i3D: the Industry Panel (Sunday at 11:00 AM), where Jason Mitchell (Valve) will host a discussion of the scalability challenges inherent in shipping games on a diverse range of platforms. The panelists are Michael Bukowski (Vicarious Visions), Jeremy Shopf (Firaxis), Emil Persson (Avalanche) and yours truly.
This is my first i3D; can’t wait to see what it’s about!
Cross Platform Shaders in 2014
A while ago I wrote a Cross platform shaders in 2012 post. What has changed since then?
Short refresher on the problem: people need to do 3D things on multiple platforms, and different platforms use different shading languages (the big ones being HLSL and GLSL). However, no one wants to write their shaders twice; it would be kind of stupid if you had to write different C++ for, say, Windows and Mac.
Visuals in some great games
I was thinking about the visuals of the best games I’ve played recently. Now, I’m not a PC/console gamer, and I am somewhat biased towards playing Unity-made games, so almost all of these examples will be iPad & Unity games. But even taking that bias into account, I think they are amazing games.
So here’s a list (Unity games):
Monument Valley by ustwo.
DEVICE 6 by Simogo.
Year Walk by Simogo (also for PC).
Shader compilation in Unity 4.5
A story in two parts: 1) how shader compilation is done in the upcoming Unity 4.5, and 2) how it was developed. The first part is probably interesting to Unity users; the second to those curious about how we work and develop stuff.
Short summary: Unity 4.5 will have “wow, many shaders, much fast” shader importing and better error reporting.
Current state (Unity <=4.3)
When you create a new shader file (.
Rant about rants about OpenGL
Oh boy, people do talk about the state of OpenGL lately! Some exhibits: Joshua Barczak’s “OpenGL is Broken”, Timothy Lottes’ reply to that, Michael Marks’ reply to Timothy’s reply. Or, earlier, Rich Geldreich’s “Things that drive me nuts about OpenGL” and again Timothy’s reply.
Edit: Joshua’s followup
In all this talk, one side (the one that says GL is broken) frequently brings up Mantle or Direct3D 12. The other side (the one that says GL is just fine, and indeed better) frequently brings up AZDO (“Almost Zero Driver Overhead”) approaches.
US Vacation Report 2014
This April I had a vacation in the USA, so here’s a write-up and a bunch of photos. Our trip: 12 days, a group of five (myself, my wife, our two daughters and my sister); we rented a car and drove around. We made the itinerary ourselves and tried to stay out of big cities and hotel chains – used Airbnb where possible. For everyone except me, this was the first trip to the USA; I had actually never ventured outside of conference cities before either.
Importing cubemaps from single images
So this tweet on the EXR format in the texture pipeline, and the replies about cubemaps, made me write this…
Typically skies or environment maps are authored as regular 2D textures, and then turned into cubemaps at “import time”. There are various commonly used cubemap layouts: lat-long, sphere map, cross layout, etc.
In Unity 4 the pipeline made the user pick which projection the source image uses. But for Unity 5, ReJ realized that it’s just boring, useless work!
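Presumably the point is that the projection can be guessed from the image itself; a rough sketch of such a guess (my own illustration, not Unity’s actual code) could go purely by aspect ratio.

    // Rough sketch (not Unity's actual code): guess the cubemap source layout
    // from the image aspect ratio alone. The enum names are made up for illustration.
    enum class CubemapLayout { LatLong, SphereMap, HorizontalCross, VerticalCross, Unknown };

    CubemapLayout GuessCubemapLayout(int width, int height)
    {
        if (width == 2 * height)     return CubemapLayout::LatLong;         // 2:1 panorama
        if (width == height)         return CubemapLayout::SphereMap;       // 1:1 mirror ball
        if (width * 3 == height * 4) return CubemapLayout::HorizontalCross; // 4:3 cross
        if (width * 4 == height * 3) return CubemapLayout::VerticalCross;   // 3:4 cross
        return CubemapLayout::Unknown;
    }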
Divide and Conquer Debugging
It should not be news to anyone that the ability to narrow down a problem while debugging is an incredibly useful skill. Yet from time to time I see people helplessly stumbling around at random when trying to debug something. So with this in mind (and also “less tweeting, more blogging!” in mind for 2015), here’s a practical story.
This happened at work yesterday, and is just an ordinary bug investigation.
Curious Case of Slow Texture Importing, and xperf
I was looking at a curious bug report: “Texture importing got much slower in the current beta”. At first glance I dismissed it as “eh, someone’s being confused” (I quickly tried several textures and did not notice any regression). But then I got a proper bug report with several textures. One of them was importing about 10 times slower than it used to.
Why would anyone make texture importing that much slower?
Optimizing Shader Info Loading, or Look at Yer Data!
A story about a million shader variants, optimizing using Instruments and looking at the data to optimize some more.
The Bug Report
The bug report I was looking into was along the lines of “when we put these shaders into our project, then building a game becomes much slower – even if shaders aren’t being used”.
Indeed it was. A quick look revealed that for ComplicatedReasons™ we load information about all shaders during the game build – that explains why the slowdown was happening even if the shaders were not actually used.
Random Thoughts on New Explicit Graphics APIs
The last time I wrote about graphics APIs was almost a year ago. Since then, Apple Metal has been unveiled and shipped in iOS 8, and Khronos Vulkan has been announced (which is very much AMD Mantle, improved to make it cross-vendor). DX12 continues to be developed for Windows 10.
@promit_roy has a very good post on gamedev.net about why these new APIs are needed and what problems they solve.
Optimizing Unity Renderer Part 1: Intro
At work we formed a small “strike team” for optimizing the CPU side of Unity’s rendering. I’ll blog about my part as I go (the idea of doing that seems to be generally accepted). I don’t know where it will lead, but hey, that’s part of the fun!
Backstory / Parental Warning
I’m going to be harsh and say “this code sucks!” in a lot of cases. When trying to improve the code, you obviously want to improve what is bad, and so that is often the focus.
Optimizing Unity Renderer Part 2: Cleanups
With the story introduced in the last post, let’s get to actual work now!
As already alluded to in the previous post, first I try to remember / figure out what the existing code does, do some profiling, and write down the things that stand out.
Profiling on several projects mostly reveals two things:
1) Rendering code could really use wider multithreading than the “main thread and render thread” split we have now.
Optimizing Unity Renderer Part 3: Fixed Function Removal
Last time I wrote about some cleanups and optimizations. Since then, I got sidetracked into doing some Unity 5.1 work, removing Fixed Function Shaders and other unrelated things, so not much blogging about optimization per se.
Fixed Function What?
Once upon a time, GPUs did not have these fancy things called “programmable shaders”; instead, they could be configured in more or less (mostly less) flexible ways, by enabling and disabling certain features.
Careful With That STL map insert, Eugene
So we had this pattern in some of our code. Some sort of “device/API specific objects” need to be created out of simple “descriptor/key” structures. Think D3D11 rasterizer state or Metal pipeline state, or something similar to them.
Most of that code looked something like this (names changed and simplified):
    // m_States is std::map<StateDesc, DeviceState>
    const DeviceState* GfxDevice::CreateState(const StateDesc& key)
    {
        // insert default state (will do nothing if key already there)
        std::pair<CachedStates::iterator, bool> res = m_States.
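The excerpt cuts off there, but the gist (the code below is my own reconstruction; only the type names come from the snippet above) is that building the value for map::insert constructs a temporary DeviceState on every call, including pure cache hits. Doing a find() first pays for the construction only on an actual miss.

    // Sketch of a cheaper variant (my own reconstruction, not the post's code):
    // only construct DeviceState on an actual cache miss.
    #include <map>
    #include <utility>

    struct StateDesc
    {
        int blend, cull; // imagine the real descriptor fields here
        bool operator<(const StateDesc& o) const
        {
            return blend != o.blend ? blend < o.blend : cull < o.cull;
        }
    };
    struct DeviceState { /* imagine an expensive-to-construct object here */ };
    typedef std::map<StateDesc, DeviceState> CachedStates;

    const DeviceState* CreateStateCheaper(CachedStates& states, const StateDesc& key)
    {
        CachedStates::iterator it = states.find(key);   // cheap lookup first
        if (it == states.end())
            it = states.insert(std::make_pair(key, DeviceState())).first; // construct only on a miss
        return &it->second;
    }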
10 Years at Unity
Turns out, I started working on this “Unity” thing exactly 10 years ago. I wrote the backstory in the “2 years later” and “4 years later” posts, so it’s not worth repeating here.
A lot of things have happened over these 10 years, some of which were quite an experience.
Seeing the company go through various stages, from just 4 of us back then to, I dunno, 750 amazing people by now?