As my life seems to be full of random and largely unexplained occurrences, I had to sell my soul.
We got an Oculus Quest 2 VR headset and, if you did not know, Oculus is owned by Facebook. This latest Quest model requires a working Facebook account, so I had to make one :(
First impressions are mixed. The resolution is higher than the HTC Vive's, but few games really show it. The battery life is appalling: 1:30 of play time, then 2:30 of charge time. The controllers are too small for my hands and the screen refresh rate is noticeably slow. On the plus side, there are no lighthouses to set up and games are very quick to load.
I need to explore ways of extending the play time, especially as I should be able to play all the HTC Vive Steam VR apps at the higher resolution. A quick Google shows that this is not a straightforward task :(
In other news, my weekend was spent playing with a Doorbell.
We need to talk about Beat Saber. Even if you have not played it you probably know that it is a VR rhythm game.
The kids bought it a while ago and I thought I would never play it, as I hate rhythm games. I am not very good at them, but Beat Saber is not a rhythm game in the sense that you have to act in time to the music (though you do, sort of).
You could happily play this game without any sound and just have fun hitting the blocks. I love the pick-up-and-play experience. Whereas Half-Life: Alyx needs a good few minutes to get into a game, Beat Saber can have you in a song in a very short time.
It does help that I like the EDM music in the game; its strong beats work really well. I often play just to hear the music, not worrying if I do well. As with all music-based games, the biggest issue is getting bored with the library. Beat Saber comes with a good selection of songs and more can be purchased, but look out for deals, as buying individual songs can get expensive quickly.
Beat Saber is an amazing experience when you first try it, and I truly mean "amazing": the setting, the lights, the reactions, the ease of just having fun. But it also has a downside. You will love the first day of game play and think you will play it every day for the rest of your life, but it does grow boring, and this is accelerated by certain children repeatedly getting substantially better scores than you.
So should you buy Beat Saber if you have VR? I think this is a must-have VR game and it is great for parties and for showing to non-VR people; just do not expect it to change your life.
If you are playing Beat Saber and hitting most if not all the blocks but not getting a high score, read the help [?] on the Beat Saber home page: you are playing it wrong. I went from consistent rankings of E-D on some of the easiest levels to A-S on Normal. ("S" is higher than "A", and I find the songs impossible after Normal mode.)
A nice long video getting Blender to create Python code that renders 32 images from four camera angles and eight materials (matcaps).
It goes into detail about using ImageMagick to create montages, then covers running Blender from the command line and automating it with scripts.
When I run these scripts on my machine they create 1728 separate renders for 54 frames of video in just under 3 minutes (including building the video).
YouTube limited the description, so here is the complete script:
In this video I will take you through the steps to create your own Blender automation script.
You can then use it to generate any number of variations and build a time-lapse style video from your save files.
Surprisingly, you do not need to know Python to create a Python script in Blender; Blender will do most of the work for you.
Let's start by setting up the Blender interface.
#Go to the Edit/Preferences and Interface and tick the "Python tooltips"
#Now when we hover over settings and buttons the tooltip will contain the Python references
Pull open a second window and set it to "Text" and open a third window and set it to "Info"
enter the single line
--import bpy
this tells the Python interpreter that we want to work with Blender
Save it as a .py file
and let's start by creating a skeleton program
we need a method to
set up the scene
set up the camera
set up the materials and lighting
and one to do the actual render
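Put together, the skeleton we are heading for might look like this. This is a sketch only; each body gets filled in over the rest of the video:

```python
# Skeleton for the Blender automation script.
import bpy  # only available when run inside Blender

def set_scene():
    pass  # render engine, world colour, render size

def set_camera():
    pass  # camera location and rotation

def set_matcap():
    pass  # matcap lighting and material

def do_render():
    pass  # write the render out as an image file

set_scene()
set_camera()
set_matcap()
do_render()
```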
--def set_scene():
I want to change the render engine, set the world colour to black, make the render shape square and the render size 25%
I do not know the magic commands to do any of that so I am going to let Blender write them for me.
Go to the Render Properties and select Workbench as the Render Engine.
Workbench is an extremely quick renderer and comes with lots of default mat caps for materials and lighting.
You will see that in the Info window Blender has recorded the command to change to Workbench
Left-click on the command to select it, then right-click and select Copy
Paste into your script
--bpy.context.scene.render.engine = 'BLENDER_WORKBENCH'
Add a line to run your new method
--set_scene()
You have now created a working Python script that controls Blender
Save it and change the Render Engine to EEVEE
Now run it and you can see it has switched the engine to Workbench!
Let's quickly do the same for the other properties we want to control
World Properties / Viewport Display Colour
Set to Black and copy and paste the code
--bpy.context.scene.world.color = (0, 0, 0)
Output Properties and set the Resolution to 1080 by 1080
and copy both lines (shift select)
and make sure you are indenting with tabs, as Python is very particular about indentation
--bpy.context.scene.render.resolution_x = 1080
--bpy.context.scene.render.resolution_y = 1080
the same for resolution percentage
--bpy.context.scene.render.resolution_percentage = 25
there is no point rendering at high detail when we want lots of small images
save your Python script
Now we want to set up our camera
add the new method
--def set_camera():
Now move your view around and from the View menu select Align View and then "Align Active Camera to View".
Open up the item properties panel and change the X by one then back again.
Copy the command from the Info window
Repeat for Y and Z
and then again for Rotation X, Y, Z
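For reference, here is a sketch of what set_camera might contain after pasting; the numbers are the cam1 location and rotation values used later in the video, and yours will be whatever your own view gave:

```python
import bpy  # only available when run inside Blender

def set_camera():
    # Location values pasted from the Info window (cam1 example values).
    bpy.data.objects['Camera'].location[0] = -7.24919
    bpy.data.objects['Camera'].location[1] = -6.81488
    bpy.data.objects['Camera'].location[2] = 3.98134
    # Rotation values (in radians) pasted the same way.
    bpy.data.objects['Camera'].rotation_euler[0] = 1.10932
    bpy.data.objects['Camera'].rotation_euler[1] = 0
    bpy.data.objects['Camera'].rotation_euler[2] = -0.827971
```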
add a call to set_camera
save your script
mess up your camera and run the script to see it all jump back to how you had it
then set up the matcap (Material Capture)
--def set_matcap():
Render Properties / Lighting and Matcap and then click on the shaded Sphere
Select something fun, like the UV lighting
and paste the two commands into your method
--bpy.context.scene.shading.light = 'MATCAP'
--bpy.context.scene.shading.studio_light = 'check_normal+y.exr'
add a sneaky change so those read
--bpy.context.scene.display.shading.light = 'MATCAP'
--bpy.context.scene.display.shading.studio_light = 'check_normal+y.exr'
add a call to set_matcap
save your work and this time change the matcap and run it
Now we just want to be able to render out our mesh as an image file
add the method
--def do_render():
This time Blender will not write the code for us, but I have it here for you
--bpy.ops.render.render(write_still = True)
and we need to set the output file name, "x"
--bpy.context.scene.render.filepath = "/tmp/x"
add a call to do_render
save and run it
you now have a file in your temp folder called x.png that is the render we set up
Awesome! you can now load up any Blender file and your saved Python script and run it to get the same render settings and output.
Now we are going to kick it into overdrive and actually do some Python coding without Blender's help. Do not worry, I will explain as I go along, and you can download the final script from the links in the description.
We want multiple camera angles so we need to add those to our program
I am going to create four and put them into an array of Python dictionaries
start with the name
--cam_setups = [
the square brackets denote an array (a Python list)
--{'camera_name':'cam1', 'location': [-7.24919, -6.81488, 3.98134], 'rotation': [1.10932, 0, -0.827971] } ,
FFW
get the camera location and rotation attributes and repeat
FFW
FFW
FFW
close off the array with another square bracket
and now we get Python to go through each array item and pass it to the set_camera method
--for cam in cam_setups:
----set_camera(cam)
and then in the set_camera method we accept the cam value as a parameter named setup
--def set_camera(setup):
and swap out the hard-coded values for the ones in the dictionary object
--bpy.data.objects['Camera'].location[0] = setup['location'][0]
so we are getting the 'location' part of the object and then the first item in the array, which is the X location
repeat for the other settings
FFW
FFW
...
now, if we run this script it will do what we want, setting the camera four times and rendering four times, but the file name will be the same each time, so you will only have the last render.
We can fix that by passing the camera_name to the do_render method
--def do_render(filename):
--bpy.context.scene.render.filepath = "/tmp/{0}".format(filename)
that {0} in curly brackets will be replaced by the filename
--do_render(cam['camera_name'])
now run it and there will be four different rendered files in your temp folder named
cam1.png, cam2.png, cam3.png and cam4.png
Now we know the basics, let's do the same with the matcaps
--mat_caps = ['basic_2', 'jade', 'metal_carpaint', 'ceramic_lightbulb', 'check_normal+y', 'check_rim_dark', 'resin', 'toon']
I chose eight of the matcaps I liked and it is just a basic array
indent correctly for Python so that each time we change camera we change matcap eight times
update the file name to include the mat cap name
--bpy.context.scene.render.filepath = "/tmp/{0}_{1}".format(filename,mat)
save and run, and we have 32 files named
cam1_basic_2.png
cam1_jade.png
...
etc
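To check the loop shape without opening Blender, here is a standalone sketch that swaps the render call for collecting the output path each render would use; the names and values match the arrays above (the location and rotation entries are trimmed, as only camera_name matters here):

```python
# Pure-Python sketch of the nested loop: the do_render call is replaced
# by collecting the file path each render would be written to.
cam_setups = [
    {'camera_name': 'cam1'}, {'camera_name': 'cam2'},
    {'camera_name': 'cam3'}, {'camera_name': 'cam4'},
]
mat_caps = ['basic_2', 'jade', 'metal_carpaint', 'ceramic_lightbulb',
            'check_normal+y', 'check_rim_dark', 'resin', 'toon']

filepaths = []
for cam in cam_setups:       # four camera angles
    for mat in mat_caps:     # eight matcaps per camera
        filepaths.append("/tmp/{0}_{1}".format(cam['camera_name'], mat))

print(len(filepaths))  # 32 renders in total
```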
Now we do not want to have to open up each Blender file, then find the script file, then run it every time; in fact we do not want to open up Blender at all, we just want the rendered images
You can run Blender from the command line. Here I am using a Linux terminal, but the premise is the same for Windows and Mac.
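The exact invocation is not shown in the description, so here is a minimal sketch using Blender's headless flags; mymesh.blend is the blend file from the video and myscript.py is my placeholder name for the saved Python script:

```shell
# Run Blender headless: load the .blend file, then execute the Python script.
blender --background mymesh.blend --python myscript.py
```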
and Blender will happily load mymesh.blend, run your Python script, and render the 32 files; you will not see Blender load, just the messages as it runs.
The next steps are Linux specific, but should be reproducible in Windows and Mac
To combine all 32 images into a single image I ran montage from ImageMagick
montage "/tmp/cam1*.png" -tile 8x1 -geometry +0+0 "/tmp/line1_montage.png"
this takes the eight cam1 images and creates a new long strip image
repeat that for all four cameras and we end up with four long strips
montage "/tmp/line1_montage.png" "/tmp/line2_montage.png" "/tmp/line3_montage.png" "/tmp/line4_montage.png" -tile 1x4 -geometry +0+0 "/tmp/montage.png"
creates a new single image with the four strips combined
I then wrote a couple of shell scripts to automate running those commands and used ffmpeg to build a video file
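Purely as an illustration, this is the kind of ffmpeg invocation that builds a video from numbered frames; the frame naming pattern and the frame rate here are my assumptions, not the ones from the video:

```shell
# Build an MP4 from numbered montage frames (pattern and rate assumed).
ffmpeg -framerate 12 -i "/tmp/montage_%04d.png" -c:v libx264 -pix_fmt yuv420p /tmp/timelapse.mp4
```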
All the script files are in the GitHub link in the description below, including a link to my free Human Skull mesh.
Disclaimer:
This page is by me, for me; if you are not me then please be aware of the following:
I am not responsible for anything that works or does not work, including files and pages made available at www.jumpstation.co.uk
I am also not responsible for any information (or what you or others do with it) available at www.jumpstation.co.uk
In fact I'm not responsible for anything ever, so there!