Author Topic: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.  (Read 7668 times)


Offline ksotar

It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
I have found several threads here about VR. Almost all of them ended with two main statements:
  • developers don't have, or can't afford, VR gear to test this feature;
  • VR will not be implemented until it can be done properly, meaning after the engine has been properly optimized.


First, I'll show my VR experience with Freespace 2, which I view on my low-priced smartphone (Xiaomi Redmi 4 Prime, 5" FullHD) inside BoboVR Z4 Mini VR goggles. The video was recorded on the smartphone itself, so, as you may guess, it all runs even smoother when I'm not recording. I believe this video can be watched in other HMDs to see the 3D effect.


I came to two conclusions while testing this:
  • a cheap alternative to the Oculus, HTC Vive, etc. is real, at least in the space-sim genre (where your character mainly sits in a chair, just like you do in front of your PC);
  • it is a widespread overstatement that VR necessarily needs a much more powerful PC than standard mode does (more on this below).

So, there is a way to run many Windows games in Side-By-Side (SBS) stereo mode even if they don't support it natively, and you don't even need paid software like TriDef. There is a tool, ReShade, that lets you inject your own shaders into Direct3D or OpenGL games, and it also grants you access to the depth buffer. That turns out to be enough for stereo 3D. A quote from Nvidia:

Quote
Traditionally, VR applications have to draw geometry twice -- once for the left eye, and once for the right eye. Single Pass Stereo uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to draw geometry only once, then simultaneously project both right-eye and left-eye views of the geometry. This allows developers to effectively double the geometric complexity of VR applications, increasing the richness and detail of their virtual world.

I'd put it the other way around: we can have VR with the same geometric complexity and little overhead compared to the usual game (and not necessarily on Nvidia hardware). And there is a shader that implements a similar technique: Depth3D. Properly configured, it can provide quite decent stereo 3D quality with really little overhead.
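To illustrate the principle (a minimal sketch of depth-based reprojection, not Depth3D's actual code; all names and constants below are illustrative): each eye's image is the already-rendered frame with every pixel shifted horizontally by a disparity derived from the depth buffer, so the scene is only rendered once.

Code:
#include <algorithm>

// Horizontal UV shift for one eye, from a linearized depth sample in [0, 1].
// eyeSign is -1 for the left eye, +1 for the right eye.
// kDepthScale and kZeroParallaxDepth are made-up tuning constants.
float parallaxShift(float linearDepth, float eyeSign,
                    float kDepthScale = 0.02f,
                    float kZeroParallaxDepth = 0.1f)
{
    // Pixels at kZeroParallaxDepth land on the screen plane; nearer pixels
    // get opposite per-eye shifts (pop out), farther ones recede.
    float disparity =
        kDepthScale * (1.0f - kZeroParallaxDepth / std::max(linearDepth, 1e-4f));
    return eyeSign * disparity; // added to the texture u coordinate for this eye
}

Since this is a pure post-process over one finished frame, its cost is a fraction of rendering the scene twice.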

My setup
I plan to update this post with more information on my setup, so anyone can go VR on the cheap with Freespace. For now I just attach the overall architecture:
https://www.lucidchart.com/publicSegments/view/ddfb81af-e5eb-4776-b7f8-c8c7896eee0f/image.png (IDK how to change image size here, so it is a link).

Problems
As you can see, we can have a decent VR experience with Freespace even without native support. The current engine is more than enough. But of course there are some problems, and things could be better.

First of all, most phone HMDs have a 16:9 aspect ratio, but images placed side by side will each be 8:9. So if the game runs at 16:9 natively, we will either get black bars at the top and bottom, break the aspect ratio, or lose resolution. I describe it in detail here. Or there is the approach I use: run FreeSpace at 1920x2160 (which is 8:9); then everything looks as in my video above. My GTX 750 handles it fine, but my laptop - unlikely. It is quite a waste, of course: rendering at 1920x2160 when we only need 960x1080 per eye.
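To make the numbers concrete, here is a quick check of that arithmetic (illustrative only):

Code:
#include <cstdio>

int main()
{
    const int panelW = 1920, panelH = 1080;     // 16:9 phone panel
    const int eyeW = panelW / 2, eyeH = panelH; // each SBS half: 960x1080 = 8:9
    // The 1920x2160 workaround renders at 8:9 and squeezes into one half,
    // so we compute four times the pixels we actually display per eye.
    std::printf("per-eye: %dx%d, workaround render: %dx%d (%dx the needed pixels)\n",
                eyeW, eyeH, panelW, 2 * panelH,
                (panelW * 2 * panelH) / (eyeW * eyeH));
    return 0;
}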

I have a roadmap in my head of what could be improved even more, but one thing at a time.


What could help right now?
If we could run Freespace at 960x1080 but stretched to a 1920x1080 window, that would allow running the Depth3D variant of VR with a decent performance boost. I believe many old notebooks that are capable of running Freespace could handle 960x1080 as well.

Secondly, just adding native SBS support to Freespace would allow "true VR" for those who can't run ReShade. And likely it will not be much overhead either: rendering 960x1080 twice instead of 1920x1080 once. From my experience it is already enjoyable, so if there is an easy way to implement it, it is definitely worth doing now, rather than waiting for some "proper" time.

Does anybody have thoughts on how to do it, or could anyone help with it?

Any questions also welcome.
« Last Edit: November 23, 2017, 05:20:55 am by ksotar »

 

Offline ksotar

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
Well, I tried to hook the WindowCreate call, with some success. The game actually starts at 960x1080, then the window hook resizes the window to 1920x1080 (which apparently adjusts the backbuffer size as well). Then I modified Depth3D to draw the left- and right-eye images side by side without squeezing them.
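For reference, the host-side idea is roughly this (a sketch, not the actual hook code):

Code:
#include <SDL.h>

// Create the window at the render resolution, then grow it to the panel
// size, as the hook does. Whether the GL backbuffer follows the resize
// depends on how the context and framebuffers were created.
SDL_Window* makeStretchedWindow()
{
    SDL_Window* win = SDL_CreateWindow("fs2_open",
                                       SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED,
                                       960, 1080, SDL_WINDOW_OPENGL);
    SDL_SetWindowSize(win, 1920, 1080); // stretch after creation
    return win;
}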


The good thing is that my GPU load went down from 100% to ~33%.
The bad thing is that we have lost the stereo 3D effect: there is no depth in that video, in fact. It must be because the Depth3D shader loses the correct connection between the backbuffer/depth buffer and/or their sizes. That means I need to figure out how Depth3D actually works, etc. That turns out to be not so easy, and its author has no time to help me.


So, if I am to figure out how to draw SBS renders, I'd better try with FSO itself. And here I got stuck too. I wanted to start with a window size of 1920x1080 but to override the viewport to 960x1080.

Quote
Basically every time there's a call to glViewport you can duplicate the renderer to split it into each eye. I used to have code to do that. I need to reimplement it and update it. My thinking was to implement two viewports that split the screen in half, then figure out how to turn the two views slightly, one to the left, one to the right, to simulate the eye offset. Then I just had to implement barrel distortion.
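A minimal sketch of that suggestion (drawScene() and the per-eye view helpers are assumed, not FSO code):

Code:
// Draw each eye into one half of the window.
void drawStereoSBS(int winW, int winH)
{
    glViewport(0, 0, winW / 2, winH);        // left half
    drawScene(leftEyeView());                // view shifted for the left eye
    glViewport(winW / 2, 0, winW / 2, winH); // right half
    drawScene(rightEyeView());
    glViewport(0, 0, winW, winH);            // restore the full viewport
}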

Well, I tried that. It looks like introducing the SDL2 library led to some changes in the code, perhaps. I found only one call to glViewport in the 3_8_0 version. It is in gropengl.cpp at line 1574. I tried to pass width = 960 there. That only led to half a screen being rendered in the start menus (with no change of the renderer to 960 width). And then I managed to start a flight; it appears to be in fullscreen 1920x1080. So something else was called that changed the resolution back, and I can't even figure out what.

Can someone point me in the right direction?
« Last Edit: December 02, 2017, 10:15:24 am by ksotar »

 

Offline ksotar

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
I have the following information now:

Quote from: m!m
If you mean framebuffers then you are correct and FSO uses them for the entire scene (which is also the reason why you can use the depth buffer in your shader). glViewport simply tells OpenGL where the render output should be sent to on the screen.
On current master the relevant calls are in gropengldraw.cpp line 698, gropenglpostprocessing.cpp line 194 and gropengltnl.cpp line 602.
gr_set_viewport is only called in matrix.cpp at the moment.

Will try to move on.

 
Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
Are you offsetting the two cameras so they're looking at different points? This URL contains the mathematics needed: https://www.quora.com/What-is-the-maximum-angle-a-human-eye-can-see Each eye should be turned away from the center by 15 degrees. So the left eye should be set to 345 degrees and the right eye to 15 degrees. The arc of vision for each eye should overlap with the other eye's.

  

Offline ksotar

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
First, I'll update on what is the current state.

I believed that the graphics subsystem would be structured like this: Geometry and transformations -> Polygon rendering -> Post-processing -> Screen draw.
So, take my "Plan minimum": render at 960x1080, but stretch that image to 1920x1080. That way ReShade + Depth3D could step in, and it would already be a playable result. And that stretching could easily be implemented in the last step, "Screen draw".

But it appeared to be more convoluted than that. My impression is that the graphics pipeline in FS is quite unlike that scheme. There seems to be not enough encapsulation, and there are quite a few dependencies on values like "max_screen_width", "window_width", "clipping width", etc. (I'm writing from memory and may be wrong about the exact names); some functions, on the other hand, don't use these values but instead query subsystems for the current window size. So it is not that easy. I was surprised to see the glViewport-based functions used to set the rendering area there and again in post-processing, etc. (even for single textures, which I believe are used as quasi-buffers). Chains of redefinitions that look like simple renaming to me also left me wondering, for example: gr_set_viewport <- gf_set_viewport <- gr_opengl_set_viewport <- glViewport, while other places still use glViewport directly, but OK. In the end I couldn't find an exact "Screen draw" phase, which should be the only place needed to implement stretching. That surprised me, because in all the OpenGL guides I have encountered, this viewport transformation is performed with a single glViewport call, following the logic of this scheme:

[diagram: the standard OpenGL transformation pipeline, ending in the viewport transform]
So I decided to try all those loosely coupled places where the render-region size is set.
I went for the glViewport calls, halving the width, like this: glViewport(0, 0, gr_screen.max_w/2, gr_screen.max_h);
I got this:

[screenshot: result of halving the glViewport width]
Here we can see that only the HUD rendered to the expected half of the window, but with the wrong aspect ratio. And it seems we get "progressive stretching"; my guess is that width = width/2 is being passed down the chain. In the leftmost part there is something totally fishy; I think that is post-processing with wrong sizes and progressive squeezing. And of course the mouse cursor was out of its place as well.

I decided that this was too many change points for a task that looked simple enough at the start, so I went the other way, as with the window hook I used before. I started the game at 960x1080 and instead changed the window-creation call:
    SDL_Window* window = SDL_CreateWindow(props.title.c_str(),
                                          x,
                                          y,
                                          props.width * 2, // double the width: 960 -> 1920
                                          props.height,
                                          windowflags);

So I got a 1920x1080 window. The video intro, the pilot profile menu, and the main menu appear stretched to the full window size (just what I wanted, and that reveals that some graphics functions depend on the actual window size, not on stored values). But when I started a mission, its loading screen and the in-flight screen were 960x1080 again. The HUD and all graphics are in perfect sync, of course, but stretching this screen to the full window seems to be just as difficult as in the previous variant.

[screenshots: menus stretched to the full window; the in-mission view stuck at 960x1080]

I just can't get the thought out of my head that there should be some screen-buffer swap operation or similar in which this could be done easily.


But OK, we have a perfect picture in one half of the screen; we may consider placing another viewport for the right eye alongside it and have true SBS. And here we need a second camera, etc. I think there is something wrong with the idea of setting the cameras 30 degrees apart; I expect it should be far less. Moreover, not only do we need the cameras to be offset, we also need an asymmetrical frustum. More on this can be found here.
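For reference, here is a sketch of the off-axis frustum computation (after Paul Bourke's classic stereo-rendering formulation; the parameter names are mine, not FSO's):

Code:
#include <cmath>

// Asymmetric frustum for one eye. eyeSign is -1 for the left eye, +1 for
// the right; the eye position must also be translated sideways by
// eyeSign * eyeSeparation / 2. Feed the results to glFrustum or an
// equivalent projection-matrix builder, together with zNear and zFar.
void stereoFrustum(float eyeSign, float fovyDeg, float aspect, float zNear,
                   float eyeSeparation, float convergenceDist,
                   float& left, float& right, float& bottom, float& top)
{
    top    = zNear * std::tan(fovyDeg * 3.14159265f / 360.0f);
    bottom = -top;
    float halfW  = top * aspect;
    // Shift both side planes so the eyes converge at convergenceDist;
    // objects on that plane end up with zero parallax.
    float offset = -eyeSign * 0.5f * eyeSeparation * zNear / convergenceDist;
    left  = -halfW + offset;
    right =  halfW + offset;
}

Note that the two frustums stay parallel in orientation; the asymmetry replaces the "toe-in" camera rotation, which is known to cause vertical parallax and eye strain.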
[diagram: asymmetric stereo frustums]
But for now I'm not at this level of understanding of how FSO rendering works (as you can see from my results). Any help would be appreciated.
« Last Edit: December 18, 2017, 03:59:42 am by ksotar »

 

Offline m!m

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
You seem to have the wrong idea about how OpenGL works. The "Viewport Transformation" in your image is done for each vertex and does not have much to do with the glViewport() call. The OpenGL function simply supplies the values for the viewport transformation and nothing more.

I also don't know what you mean by "Screen draw" phase. Everything between two gr_flip() calls is drawn to the screen so I don't know what exactly you mean. If you want to find out more about how FSO renders stuff to the screen you could try running it in RenderDoc which will record all the rendering operations FSO does and display them in a nice tree view for you. FSO also supplies some scope information to make it easier to follow what exactly happens.

Quote from: ksotar
I believed that the graphics subsystem would be structured like this: Geometry and transformations -> Polygon rendering -> Post-processing -> Screen draw.
So, take my "Plan minimum": render at 960x1080, but stretch that image to 1920x1080. That way ReShade + Depth3D could step in, and it would already be a playable result. And that stretching could easily be implemented in the last step, "Screen draw".
What exactly is "Geometry and transformations"? The way you describe it is mostly how FSO renders stuff but there are a few steps in between (e.g. shadow map rendering and HUD drawing).

Quote from: ksotar
[...] Chains of redefinitions that look like simple renaming to me also left me wondering, for example: gr_set_viewport <- gf_set_viewport <- gr_opengl_set_viewport <- glViewport, while other places still use glViewport directly, but OK.
The reason that "renaming" exists is so that we can have multiple rendering backends which can be chosen at runtime. In addition to gr_opengl_set_viewport there is also gr_stub_set_viewport which does nothing and is only used by the standalone server which does no rendering. This mechanism is pretty old (it was present in the original source code release) which is the reason why it doesn't use a more object-oriented approach which may be easier to understand.
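In other words, something like this (a simplified illustration of the pattern, not FSO's actual declarations):

Code:
#include <GL/gl.h>

// One function-pointer slot per drawing operation; the chosen backend
// fills the table at startup.
struct gr_backend_functions {
    void (*gf_set_viewport)(int x, int y, int width, int height);
    // ... many more gf_* entries in the real table
};

static void gr_opengl_set_viewport(int x, int y, int w, int h) { glViewport(x, y, w, h); }
static void gr_stub_set_viewport(int, int, int, int) { /* standalone server: no-op */ }

static gr_backend_functions gr_funcs = { gr_opengl_set_viewport }; // picked at runtime

// What the rest of the engine calls; it just forwards to the active backend.
inline void gr_set_viewport(int x, int y, int w, int h)
{
    gr_funcs.gf_set_viewport(x, y, w, h);
}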

 

Offline ksotar

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
I can see it now. I was wrong to assume there'd be a phase where the resulting image is mapped to the "viewport", allowing it to be resized/stretched (like in the numerous examples about glViewport). Instead it is all layered up (HUD, shadows, etc.) in the backbuffer, which is logical enough if you're not planning any window resizing, etc.

I have found the exact call where the buffer flip occurs: game_flip_page_and_time_it.

So now it seems to me that I should copy half of the backbuffer into a texture and then render it onto a full-window textured quad. Or maybe do some glBlitFramebuffer to itself. But gl calls are unavailable at this level, and I'm a little lost in all those wrappers. What do you think, how should I proceed?
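For the blit idea, I imagine something along these lines (a sketch; sceneFbo stands for whatever FBO holds the finished 960x1080 frame; note that blitting a framebuffer onto itself with overlapping rectangles is not allowed, so the source has to be a separate FBO or texture):

Code:
// Stretch-blit the 960x1080 scene into the full 1920x1080 window backbuffer.
void stretchSceneToBackbuffer(GLuint sceneFbo)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo); // assumed FBO id
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);        // default (window) framebuffer
    glBlitFramebuffer(0, 0, 960, 1080,                // source rectangle
                      0, 0, 1920, 1080,               // destination, stretched
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}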

Quote
The reason that "renaming" exists is so that we can have multiple rendering backends
I get it now. But that gr <- gf <- gr sequence was quite confusing. I thought the prefixes in function names meant some level of abstraction, a subsystem, or a module; having them tossed back and forth left me muddled.

Quote
"Geometry and transformations"
I guess I meant the vertex-manipulation stage. But as it has cleared up a bit, it doesn't matter :)

Quote
RenderDoc
That looks like a great tool, judging by the screenshots and description! But so far I have been unable to run FSO with it; it says:

[screenshot: RenderDoc error message]
Again, thank you very much.

 

Offline m!m

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
Quote from: ksotar
I can see it now. I was wrong to assume there'd be a phase where the resulting image is mapped to the "viewport", allowing it to be resized/stretched (like in the numerous examples about glViewport). Instead it is all layered up (HUD, shadows, etc.) in the backbuffer, which is logical enough if you're not planning any window resizing, etc.
Our post processing pass may be exactly what you are looking for: https://github.com/scp-fs2open/fs2open.github.com/blob/master/code/graphics/opengl/gropenglpostprocessing.cpp#L427

That, and the code above that line, draws the scene texture to the backbuffer. You could try using that to duplicate the scene for both eyes. I don't know enough about how that stereoscopic reprojection shader works to say anything more about this though.
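If that pass is where the scene texture reaches the backbuffer, the SBS duplication could be as simple as issuing that final draw once per half (a sketch; drawPostProcessQuad() stands for the existing draw at the linked line, which I'm only assuming can be reused like this):

Code:
// Emit the post-processing output twice, once into each half of the backbuffer.
void drawPostProcessSBS(int backW, int backH)
{
    glViewport(0, 0, backW / 2, backH);         // left-eye half
    drawPostProcessQuad();
    glViewport(backW / 2, 0, backW / 2, backH); // right-eye half
    drawPostProcessQuad();
    glViewport(0, 0, backW, backH);             // restore the full viewport
}

Of course this only duplicates one mono image; real stereo would still need either two scene renders or the Depth3D-style depth reprojection on top.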

 

Offline ksotar

Re: It's a good time for VR, since one can have HMD for $20+smartphone. I'll show.
I made a wrapper function and inserted it into game_frame() just before game_flip...
To try and test some buffer techniques, it goes like this for now:

Code:
void gr_opengl_stretch_buffer()
{
    //glDrawBuffer(GL_BACK);
    glClearColor(0.f, 1.f, 0.f, 0.5f); // green, to make the affected area visible
    //glClear(GL_COLOR_BUFFER_BIT);

    // Render a quad: switch back to the default framebuffer and draw the
    // scene textures onto it.
    GL_state.PopFramebufferState();
    GL_state.Texture.Enable(0, GL_TEXTURE_2D, Scene_ldr_texture);
    GL_state.Texture.Enable(1, GL_TEXTURE_2D, Scene_depth_texture);
    opengl_draw_textured_quad(-0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f);

    // Blit experiments, disabled for now:
    //glBindFramebuffer(GL_FRAMEBUFFER, 1);
    //glBlitFramebuffer(0, 0, gr_screen.max_w, gr_screen.max_h,
    //                  0, 0, gr_screen.max_w*2, gr_screen.max_h,
    //                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
    //glDrawBuffer(GL_COLOR_ATTACHMENT0);
}


glClear shows that the framebuffer is the full window size - that's good. But opengl_draw_textured_quad doesn't seem to do anything in this case.

I'll get to the second eye's image later; there are plenty of problems ahead.


I was able to run with RenderDoc. As expected, I had to disable ReShade, since two OpenGL hooks at once are too much. But I could only do one manual capture; after the second one FSO crashes, alas.