Welcome to this breakdown of our texture creation process.
In this guide we will be going over our workflow for creating a fully tileable photoscanned surface. While we won't be covering any single topic in great detail, our aim is to provide a comprehensive document that can be used as a guide by anyone who wants to create their own photogrammetry-based textures.
What is a "photoscanned" texture?
Photogrammetry, or photoscanning, refers to the process of taking multiple overlapping photos of an object or surface, and then using special photogrammetry software like Reality Capture to create a highly accurate 3D version of it by cross-referencing points in different photos and using maths to figure out where everything sits in 3D space.
How are photoscanned textures different from other kinds of textures?
Since photogrammetry is capable of building such accurate 3D versions of real surfaces, it is generally much more realistic than other methods like manual painting, procedural generation, or estimating displacement/normal information from a single input texture. Once we have a scan of a surface, it becomes a simple matter to extract all the necessary real-world information from it, like displacement, normal, color, ambient occlusion, etc.
Step 1: Photography
There are already many resources available that demonstrate different techniques for photoscanning surfaces for the purpose of texture creation, so rather than describe it in detail here, we simply recommend these videos:
- Photogrammetry workflow for surface scanning with the monopod-Gravel PBR Material
- Outdoor Photogrammetry Surface Scanning for Materials with a Flash
- https://80.lv/articles/photogrammetry-almanac-environment-pbr-texture-creation
In a nutshell, you want to capture around 300 images per 2-metre-square area, with spacing between shots kept as even as possible.
It helps to place a pair of measuring tapes on each axis of the area you want to scan, which serves both as a spacing aid while shooting and as a reference later on for calibrating the scale of the final material.
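To get a feel for what that spacing looks like against the measuring tapes, here is a quick back-of-the-envelope sketch. The image count is the guideline above; laying the shots out as a uniform grid over a 2 m x 2 m area is our own assumption:

```python
import math

# Rough capture-planning sketch: how far apart should shots be
# to hit ~300 images over an assumed 2 m x 2 m area?
area_side_m = 2.0      # side length of the scan area (from the measuring tapes)
target_shots = 300     # recommended image count for this area

shots_per_row = math.ceil(math.sqrt(target_shots))   # assume a square grid
spacing_cm = area_side_m / shots_per_row * 100

print(f"{shots_per_row} x {shots_per_row} grid, one shot every ~{spacing_cm:.1f} cm")
# -> 18 x 18 grid, one shot every ~11.1 cm
```

In practice you won't walk a perfect grid, but ~10 cm between shots is a useful mental target while shooting.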
Color chart shot |
Finally, we recommend shooting a number of additional reference/contextual photos, especially one that highlights the specular properties of your surface, to help you build an accurate material later on.
Step 2: Adjust/sort input photos in RawTherapee or Darktable
Before starting any new scan project, we need to process our raw photos in a free application like RawTherapee or Darktable. We do not recommend Adobe Lightroom, as it's difficult to make photos linear there, which is important for color accuracy.
While most photogrammetry software can import raw files directly, we want to do certain batch processes on all the photos simultaneously before exporting them, like brightness adjustments, white-balance fixes, vignette removal, and chromatic aberration removal.
Keep in mind we don't want to do any adjustments that will confuse the photogrammetry software. Generally, this means we have to stay away from filters like sharpen, denoise, and so on, since those will treat individual pixel clusters differently on each photo. We ideally need to stick to filters and adjustments that change all pixels universally across all the images (so things like brightness, contrast, and color balance are fine; vignette removal can be a little more tricky, but is usually also fine). After doing these adjustments we will export the raws as 16-bit TIFF files.
Note: It is highly recommended to export 16-bit files like TIFF. They are incredibly large and will take up a lot of space on your hard drive, but you can delete them after you are done with the project, since you will still have your raws. While you can use other formats, we have found that formats like PNG are far too slow to load/save, and formats like EXR, while small and fast, will create problems when exporting the texture bakes from Reality Capture later in the process. Never export your photos in 8-bit regardless of format, as this will effectively prevent you from doing heavy color adjustments to your textures later on (the extra bit depth is handy for making the roughness map, for example).
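If you'd rather script the batch export than use the GUI queue, RawTherapee also ships a command-line front end. Here is a hedged sketch, assuming `rawtherapee-cli` is on your PATH and that you've saved your batch adjustments from the GUI as a .pp3 profile; the folder names, profile name, and raw extension are all placeholders, and it's worth checking `rawtherapee-cli --help` for the exact flags on your build:

```python
# Sketch: batch-convert raws to 16-bit TIFFs with RawTherapee's CLI,
# applying one shared .pp3 processing profile to every photo.
import subprocess
from pathlib import Path

raw_dir = Path("scan_raws")          # folder of raw photos (placeholder)
out_dir = Path("scan_tiffs")         # export folder (placeholder)
profile = "batch_adjustments.pp3"    # profile saved from the GUI (placeholder)
out_dir.mkdir(exist_ok=True)

for raw in sorted(raw_dir.glob("*.NEF")):   # use your camera's raw extension
    subprocess.run([
        "rawtherapee-cli",
        "-o", str(out_dir),   # output location
        "-p", profile,        # apply the shared processing profile
        "-t",                 # write TIFF
        "-b16",               # 16-bit output, as recommended above
        "-c", str(raw),       # input file (must come last)
    ], check=True)
```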
The RawTherapee process:
1. To import a new batch of photos to RawTherapee, click the File Browser tab at the top left of the UI, and navigate to the folder containing all your photos. It's a good idea to start with the color-balance adjustments, so double-click on your color chart shot to open it up.
2. With the color chart opened, we can set the white balance by clicking on the "Color" tab on the right (shortcut: ALT+C) and using the white-balance picker to sample one of the grey tones at the bottom of the color chart. The software will use this grey information to automatically figure out an accurate, "neutral" white balance.
Importing images into RawTherapee |
RawTherapee Whitebalance |
3. Copy this white-balance adjustment to all the other photos using the CTRL+C/CTRL+V shortcuts. Note: Most operations can be copied to the other images in this manner, and we can paste partial operations using CTRL+SHIFT+V.
4. Each batch of photos will need different treatment. For example, some batches will need brightness and contrast adjustments (found in the Exposure tab, shortcut: ALT+E), or we may need to crop certain images that have undesired elements by going to the Transform tab (shortcut: ALT+T).
Note: Crop operations can also be copied to multiple images. This is especially useful for cropping out shoes or tripod legs if they appear in multiple photos, since these will most likely be in more or less the same areas of the image. As a rule, it's usually better to crop out undesired elements (if possible) than to delete entire photos, since we need every bit of pixel information we can get in order to build a great scan.
5. Once we've done our adjustments and cropped out any undesired elements, we should go through the entire batch of photos and delete any blurry or noisy images that we see. This can be very time consuming on large datasets, and it's sometimes not feasible to look at every single image in detail. Still, it's a good idea to at least scroll through briefly and investigate any photos that look a bit weird, even if we're mostly just looking at thumbnails. (For an optional automated first pass, see the sketch after this list.)
Deleting undesired photos |
6. Finally, we need to export our adjusted photos. To do this, we shift-select all the images we want to export, right-click, and select Put to Queue:
Put to Queue |
Then we select the Queue tab on the very left of the screen to view all the images. Here we can set our save location, export format, and bit depth. Again, it's highly recommended to use something like uncompressed 16-bit TIFF files:
Exporting the images |
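As promised above, here is an optional helper for shortlisting suspect photos before the manual scroll-through. This isn't part of the RawTherapee workflow itself, just a sketch using the common variance-of-Laplacian sharpness heuristic; it assumes OpenCV is installed (`pip install opencv-python`), and the folder name and threshold are placeholders to tune per dataset:

```python
# Flag likely-blurry photos so you only have to eyeball the suspects.
from pathlib import Path
import cv2

BLUR_THRESHOLD = 100.0   # assumption: scores below this warrant a manual look

for path in sorted(Path("scan_tiffs").glob("*.tif")):
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian: low variance = few sharp edges = likely blur
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"check for blur: {path.name} (score {sharpness:.0f})")
```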
RawTherapee Dos and Don'ts (summary):
DO:
- Set the correct white balance on the entire photo set, using the provided color chart. Like with many of the adjustments we do to photosets, this should be done as a batch process.
- Crop out any anomalies on individual photos, or things that we don't want in the scan. This especially includes anything that moves (e.g. leaves, rocks, or grass that obviously moved during the shoot). Scanning software will be confused by moving objects, since it needs static shapes to cross-reference between images in order to build a reliable point cloud.
- If necessary, set the exposure/contrast. Usually it's best to keep this subtle, and to do it as a batch process as well if we can. Normally, if the light changes during a shoot (as it often does), it can be ignored unless it's a really harsh shift. Reality Capture seems to be generally good at recognizing shapes without relying too much on value information.
- Reduce any vignette effects. (Heavy vignetting is sometimes a result of using a polarised flash setup in a dimly lit environment.) It's best to remove the vignette, since if we leave it in, the resulting texture may be slightly darker than it should be.
DON'T:
- Do any kind of cloning, painting, or general processing that alters photos on a per-pixel level. This mostly includes things like noise removal and sharpening filters. It may still be possible to do those things in limited cases, but altering individual photos in this way will result in issues in the build step. It's often better to just delete a blurred photo entirely than to try to rescue it.
Step 3: Building the scan in RealityCapture
The next step is to load our images into our photogrammetry software of choice. We use RealityCapture, which is most often recommended for photoscanning in general and is particularly good for surface texture scanning; note, however, that it is paid software that charges per input. Other alternatives are 3DF Zephyr and Metashape, which are also commonly used for this purpose.
A typical scan uses around 300 images from a 24MP camera. The more images and the higher their resolution, the more expensive the scan and the more intense the hardware requirements; however, this also results in higher quality outputs.
Once the inputs are paid for in RealityCapture, you're free to re-process the scan and export any data as much as you like. You also only pay at the end when you're ready to export, leaving you free to play around and experiment before needing to make any purchases.
Meshroom is a decent free alternative, but we generally don't recommend it because it's slow and requires a lot of fiddling to get decent results.
1. Simply drag the entire folder of your exported TIFFs into RealityCapture.
Drag the dataset into Reality Capture |
Reality Capture will give us a little popup asking whether we'd like to convert the images to 8-bit on import, or whether we want to use the original 16-bit ones.
2. Once we have the images imported, we can go to the Alignment tab at the top of the screen and click "Align Images". This can take a few minutes, depending on the amount and resolution of the images.
Align images |
We can also check the point cloud in the 3Ds view to get an idea of how accurate the build was. Here we can see the cameras (at the top) look pretty consistent, and the point cloud at the bottom looks flat and consistent as expected. We can also drag/rotate the bounding box in the 3D view to crop the scan down to the area we want (cropping down the scan will save time during the next steps). Use the colored dots to scale the bounding box, and use the widget in the center to rotate and move it.
3Ds view |
Optional: If necessary, we can also build a low-detail preview version of the scan to get an even clearer idea of what it will look like:
3. Once we're happy with the point cloud, we can go ahead and build the high-detail scan. This can take a lot of time (several hours) depending on the speed of the PC we're working on and the amount/resolution of input photos provided. RealityCapture is now building a super dense mesh with potentially hundreds of millions of triangles.
High Detail |
Reconstruction |
Renaming the scan |
Creating different meshes for baking: We will be using the initial high-detail mesh to bake the height and normal maps, but we'll create a separate, lower-detail version of the mesh for the color bake using RealityCapture's simplify tools. (At 75M triangles, the main mesh would be a pain to unwrap and texture-project, and it can also cause memory issues when baking.)
4. Now we can create the simplified version of the mesh for texture projection. With the high-res mesh selected, go to Tools > Simplify Tool; in the tool settings, set the Type to Absolute and the target triangle count to somewhere around 10-20 million (depending on the complexity of the original), then click Simplify:
The simplify process will take a short while. When it's done, we can select the new mesh and rename it, adding "_TEXTURED" to the name so that we don't get confused later on when baking.
Simplifying the mesh |
Renaming the simplified mesh |
5. Next, we unwrap the simplified mesh using a fixed texel size. When that's done, we can expand the model in the 1Ds view and see that, because we selected fixed texel size, it has created as many UV maps as needed (in this case 4 UV maps) to ensure 100% quality when projecting the texture onto the scan.
Note: Of course we will lose some quality when we bake the texture back down to a single plane, since it will be taking all the texture information from 4 UV maps and squeezing it onto a single final map, but it's usually best to project at the highest possible quality before baking regardless.
6. Now we need to set the texture downscale factor. To do this, click on the model, go to Mesh Model > Mesh Color & Texture > Settings, and make sure Downscale Images Before Texturing is set to 1 to ensure maximum quality. Then run the texturing tool:
Setting downscaling factor |
Texture the mesh |
The texture projection will take a while again, and when it's done we should have the color information projected onto the scan like this:
The textured scan |
7. Next, we need to export the scan to Blender so that we can align the bake plane to it.
Exporting the high-detail mesh would be very time consuming, and Blender probably won't even open it on most PCs. We can therefore either export the 12m_TEXTURED mesh, or create an even lower-detail version. To do this, we can just repeat the simplify step, but set it to something like 1-5 million triangles:
8. We'll select the mesh we want to export in the 1Ds view, go to Tools > Export > Mesh and Point Cloud, and in the export dialogue box set the location and filetype. In this case we'll use OBJ as the filetype:
And click Save.
Note: At this point RealityCapture will ask us to pay up before we can export. We'll select our payment method and save out the file.
Creating a low-detail proxy mesh |
Export the proxy mesh |
9. Now we'll get another dialog box with various settings before export. Use the settings shown in the screenshot below; these are important, since they will determine the orientation of our bake plane once we import it back in from Blender:
Proxy mesh export settings |
Step 4: Baking in RealityCapture
Overview
"Baking" is the process of transferring all the details of the scan onto individual texture maps (images), in order to make it easy for the end user to use these images and apply them to different objects and surfaces in a 3D scene.
There are many different types of maps, but usually when we bake we create only 3 important texture maps that all the other maps can be derived from.
These 3 maps are the Color (also called "Diffuse" or "Albedo"), Displacement, and Normal maps.
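As an aside, here is what "derived from" means in practice: a tangent-space normal map, for instance, is essentially the per-pixel slope of the height field. The following is a purely illustrative numpy sketch of that idea (not the baker RealityCapture uses), assuming a single-channel height image with values in 0..1:

```python
import numpy as np

def normal_from_height(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a displacement/height map."""
    # Horizontal and vertical gradients of the height field
    dy, dx = np.gradient(height.astype(np.float64))
    # Build per-pixel normals (-dH/dx, -dH/dy, 1), then normalize
    nx, ny = -dx * strength, -dy * strength
    nz = np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] to [0, 1] for storage as an RGB image
    return n * 0.5 + 0.5
```

This is why the Color, Displacement, and Normal bakes are the ones worth getting right: maps like ambient occlusion or bump can be regenerated from them later.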
Before we can bake the scan data onto texture maps, we first need to create a bake-plane in Blender. This plane will define the area of the scan that we want to transfer onto texture maps.
Remember that the scan itself is in a very "messy" state. There are sometimes lots of glitches and artefacts on the edges and corners of the scan geometry, and we will want to avoid these as far as possible, so we'll use the plane to find the best "clean" area on the scan.
Creating a bake-plane:
- In Blender, import the proxy version of the scan: File > Import > Wavefront (.obj)
- We'll use this proxy version of the scan to align the plane to the area we want to bake. It's best to use the largest clean area available on the scan, and throughout the process we'll try to keep the plane square (i.e. don't scale it on just the X or Y axis). To add a plane, press SHIFT+A and click Mesh > Plane.
- We'll position the plane so that it hovers just above the scan, keeping it an even distance above the average surface. It doesn't matter if parts of the scan poke through the plane, as long as it's hovering an average height of a few cm or mm above the surface.
Aligning the plane in Blender |
We can also add a few manual subdivisions to the plane (TAB into edit mode, select all verts, and right-click > Subdivide) and then slightly warp the plane using the proportional editing tool, in order to make it fit the surface more snugly.
Adding subdivisions to the plane |
- Select the plane, press CTRL+A, and select "All Transforms" from the list. It's important that the plane's transforms are applied, even if the plane geometry itself is rotated or upside down.
- Make sure the plane's normals are facing "up" in relation to the scan surface, by clicking the "Viewport Overlays" button at the top of the 3D view and then selecting "Face Orientation". Blue means the geometry normals are all pointing in the correct direction; red means they are pointing the wrong way:
- Above, we can see the plane's normal direction is correct, since it shows up as blue.
Plane normal direction is wrong |
- Alternatively, if we have the situation in the above image where the plane shows up as red, we need to fix it as follows: select the plane and press TAB to go into edit mode, press A to select all the vertices, then press ALT+N and choose Flip to flip the normal direction. Now press TAB again to exit edit mode.
- We should also make sure the plane normals are set to smooth. To do this, select the plane, right-click, and select Shade Smooth:
- When all the above is done, select the plane, go to File > Export > Wavefront (.obj), and make sure "Selection Only" is enabled at the top right (for reference, these steps are also sketched as a script below):
Blender obj export settings
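For repeat scans, the plane-prep routine above can be scripted with Blender's Python API. The following is a hedged sketch assuming a recent Blender (4.x operator names; older versions use `bpy.ops.import_scene.obj`/`export_scene.obj` instead), with placeholder file paths; you would still position and warp the plane by hand between the add and apply steps:

```python
import bpy

# Import the proxy scan exported from RealityCapture
bpy.ops.wm.obj_import(filepath="scan_proxy.obj")

# Add the bake plane; position/scale it over the cleanest area by hand
bpy.ops.mesh.primitive_plane_add(size=2.0, location=(0, 0, 0.02))
plane = bpy.context.active_object
plane.name = "Bake_Plane"

# Apply all transforms so the exported plane has a clean orientation
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Make sure the normals face outward/up and shading is smooth
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.shade_smooth()

# Export only the selected plane (the "Selection Only" step above)
bpy.ops.object.select_all(action='DESELECT')
plane.select_set(True)
bpy.ops.wm.obj_export(filepath="bake_plane.obj", export_selected_objects=True)
```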
Now we can start setting up the bake in RealityCapture:
1. Before we import the bake plane, we need to set our default bake resolution. Go to Mesh Model > Settings and set the imported-model default texture resolution. In this case we'll use 8192x8192.
Setting the default bake resolution |
2. Now we import the model by going to Mesh > Import Model.
We should also rename our imported plane to something that makes sense, like "Bake Plane".
3. When we are ready to bake, go to Tools > Texture Reprojection.
4. We'll bake the Normal and Displacement maps first, so in the reprojection settings, set the source model to the highest-detail geometry (in this case the 75M-triangle version) and the result model to the bake plane:
Setting the source model for baking |
And of course make sure Displacement Reprojection and Normal Reprojection are both enabled.
Enable Displacement and Normal Reprojection |
5. Once that's done, we can bake the color. We'll change the Source model to the textured version, disable Normal and Displacement reprojection, enable Color reprojection, and make sure the source layer is set to the 16-bit Color layer. Then click Reproject again as shown above.
Exporting the baked textures from Reality Capture:
- Once our bakes are done, we can export them as texture maps. We'll select the bake plane and go to "Mesh Model" to export.
Exporting the textures |
- We'll set the type to "Just Textures" and click Save.
Note: Reality Capture should not ask us for payment again, since we already paid for this scan when we exported the proxy mesh earlier.
Set the type to Just Textures |
- In the export options, we'll select the texture maps and formats we want to save out (preferably TIFF for the normals and 16-bit color, and EXR for the displacement), then click OK.
Note: the "Color Layer" in the above export window refers to the Vertex Colors, which we don't need. The correct color channel here is the 16-bit Color Layer as shown.Texture export settings - New we just click "ok" and that's it ! We've saved out all our maps, and we can now continue to Unity ArtEngine to convert them into a seamless PBR texture set.
Step 5: Tiling in Unity ArtEngine
ArtEngine is a node-based application used for authoring high quality seamless PBR texture sets. It makes use of AI-based tools as well as conventional methods to modify and "mutate" existing textures. It has many different nodes and features that are useful on all kinds of material types, but we will only be looking at the basics in this guide.
Note: As of July 2022, ArtEngine is no longer actively maintained, but the standard version is still offered as a kind of perpetual free trial that works on Windows. Unity has stated that much of the functionality and AI-based tools will be available as part of their cloud-based toolset in the future.
To get started installing the free version, click here. (You will need a Unity ID account to use it)
Documentation and further help on getting started are available.
ArtEngine Interface Overview
ArtEngine UI |
1. Firstly we need to import our RealityCapture bakes into ArtEngine. Simply click and drag them into the node editor here:
Dragging textures into ArtEngine |
2. We can see from the tiling preview on this particular map that it has a gradient, which is causing an obvious repetition when we zoom out. We can add a Gradient Removal node to help remedy this. In the nodes panel on the left, search for Gradient Removal and drag it into the node graph:
Adding a Gradient Removal node |
connecting the nodes |
Previewing the gradient removal |
3. We can double-click the displacement and normal maps to view their previews. The normal map was saved from Reality Capture as a 16-bit PNG, so it should show up fine; however, the displacement map just shows black:
This is because it was saved as a 32-bit EXR file. The displacement data is still there, but it's showing black because the range is outside of what the image viewer or our screen can display. While it's certainly possible to work with 32-bit files in ArtEngine, it's a bit overkill for us, and we'll convert it to 16-bit by putting it through an Auto-Levels node. This will also make it possible for us to preview it while working:
Auto levels |
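For the curious, this Auto-Levels conversion amounts to a linear remap of the displacement's value range. Here is a hedged sketch of the same operation outside ArtEngine, assuming imageio with EXR support is installed (e.g. `pip install imageio` plus its freeimage/OpenEXR plugin); the filenames are placeholders:

```python
import imageio.v3 as iio
import numpy as np

# Read the raw 32-bit displacement bake (EXR support may require a plugin)
disp = iio.imread("displacement.exr").astype(np.float64)

# Stretch the full value range to 0..1 -- this is the "auto levels" step
lo, hi = disp.min(), disp.max()
disp01 = (disp - lo) / (hi - lo)

# Quantize to 16-bit and save as PNG for easy previewing
iio.imwrite("displacement_16bit.png", (disp01 * 65535).astype(np.uint16))
```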
The displacement map also shows a pretty harsh gradient. We can reduce it the same way we did with the color map, by adding a Gradient Removal node. Remember to keep this one subtle as well.
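Conceptually, gradient removal fits a smooth brightness trend across the image and subtracts it, leaving only the local detail. ArtEngine's node is more sophisticated, but a minimal sketch of the principle is a least-squares plane fit (assuming a single-channel float image in 0..1):

```python
import numpy as np

def remove_linear_gradient(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Solve img ~ a*x + b*y + c for the best-fit plane
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    # Subtract the trend but keep the original mean brightness
    return np.clip(img - plane + plane.mean(), 0.0, 1.0)
```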
4. Now that we've adjusted the maps a bit, let's go ahead and use the displacement information to generate an Ambient Occlusion map:
Adding an AO node |
Preview the AO node |
Adjusting the AO strength |
Adding a Roughness generation node |
5. Now that we've generated all the necessary maps, we can combine them into a material using the Compose Material node. We'll route the information from the various stages in the node-chain to their appropriate slots. For example, the color gradient removal goes into the Albedo input ("Albedo" is another term for "Color"), and of course Roughness goes to Roughness:
Connecting the Albedo and Roughness nodes |
Connecting the Displacement |
Connecting the AO and Normal nodes |
Removing the metalness input |
Adding the Seam Removal node |
Seam removal node |
6. Next we'll connect a Seam Removal node to the end of the chain. With the Seam Removal node selected, we can click the Execute button to start the seam removal process:
Once done, we can double-click to preview it, and we can again hit TAB in the 2D view to toggle the tiling preview:
We can also preview the different maps in the material by clicking the little menu in the top right of the 2D view:
If done correctly, there should be no visible seams on any of the maps.
Execute Selected Node |
Previewing the texture tiling |
Previewing the other maps |
7. The seam removal worked pretty well, but we can still adjust any areas that look a bit off. Let's zoom in a bit and take a look at this slightly awkward fading shape here.
While there are several ways we can tackle this, for now we'll go for a simple clone operation. Let's connect a Clone Stamp node to our node-tree.
Adding a Clone Stamp node |
We'll double-click on the Clone Stamp node to view its properties on the right, where we can set things like the size, hardness, and opacity of our clone brush.
Once we've set our brush parameters, we can ALT+click in the 2D view to select a clone source, then just click and drag over any area to start cloning:
We'll just copy some of the concrete over that awkward shape to hide it. We can ALT+click at any time to reset our clone source, and use CTRL+Z to undo any action.
And so we can continue using the clone brush to fix any weird areas.
Clone brush settings |
Note: The clone-stamp brush is just one of many methods at our disposal. We can also use nodes like the Content-Aware Fill node to fix up any areas that need work.
Highlighting an area using Content-Aware Fill |
Content-Aware Fill does its thing |
8. Our maps are now ready to export. We can add an Output node and then set our export parameters on the right. It's recommended to always use a 16-bit lossless format such as PNG.
Export settings: PNG 16-bit |
We finally have a fully tiling texture set! We were able to get a really nice result from just 3 maps and a relatively small node-tree.
Final maps export |
Step 6: Material setup
Finally, we're ready to assemble the texture maps into a material in Blender. We'll set it up with Adaptive Subdivision for that nice crispy displacement detail.
- First we'll open Blender, delete the default cube and light, and add a sphere. We'll also add a material to the sphere and set it to Displacement Only in the material settings.
Setting the material to use displacement only |
- We'll enable the "Experimental" feature set in the render settings, and we can also enable GPU Compute if we have a graphics card installed in our PC (GPU rendering is generally much faster than CPU).
Setting the Cycles feature set to Experimental |
- We can add a Subdivision Surface modifier to the sphere and enable Adaptive Subdivision (this only works if Experimental is turned on).
Enable Adaptive Subdivision |
- Now we'll go to the Shading tab and select our sphere to start setting up the material in the node editor. We can simply drag our texture maps into the node editor one by one. (We don't need the AO map, since it is only used in real-time engines like Eevee, in game engines, or as a mask.)
Drag the textures into the shader editor |
- We'll connect our Diffuse (color) and Roughness maps to their respective sockets on the Principled BSDF node. Remember to set the Roughness map's color space to Non-Color, since it's an information map and doesn't contain any color information.
- We'll need to press Shift+A and add a Vector > Normal Map node after the normal texture map, and then connect it to the Normal socket on the main node. Again, remember to set the Color Space to Non-color or it will not render correctly.
Adding a Vector Normal Map node |
- Now we'll hit Shift+A to add a Vector > Displacement node, and plug our displacement map into its Height slot. Then plug the output into the Displacement socket on the Material Output. Remember to set the color space to Non-Color on the displacement map as well.
Adding a Displacement node |
- Since we're going to use generated texture coordinates for this preview, we'll need to set the projection method on each texture map node from "Flat" to "Box".
Setting the projection method to "Box" |
- Now we can hit Shift+A to add an Input > Texture Coordinate node and a Vector > Mapping node, and connect them to each input texture as shown below:
The full material node setup |
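The same node setup can also be built with a few lines of bpy, which is handy when assembling many texture sets. The following is a hedged sketch (file paths are placeholders, and the Texture Coordinate/Mapping nodes are omitted for brevity):

```python
import bpy

mat = bpy.data.materials.new("Photoscan_Material")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]
out = nodes["Material Output"]

def tex(path, non_color=False):
    # Image texture node, with Non-Color space for the data maps
    node = nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:
        node.image.colorspace_settings.name = "Non-Color"
    node.projection = 'BOX'   # matches the "Flat" -> "Box" step above
    return node

color = tex("color.png")
rough = tex("roughness.png", non_color=True)
nrm = tex("normal.png", non_color=True)
disp = tex("displacement.png", non_color=True)

links.new(color.outputs["Color"], bsdf.inputs["Base Color"])
links.new(rough.outputs["Color"], bsdf.inputs["Roughness"])

nmap = nodes.new("ShaderNodeNormalMap")
links.new(nrm.outputs["Color"], nmap.inputs["Color"])
links.new(nmap.outputs["Normal"], bsdf.inputs["Normal"])

dnode = nodes.new("ShaderNodeDisplacement")
dnode.inputs["Scale"].default_value = 0.05   # keep this subtle (see below)
links.new(disp.outputs["Color"], dnode.inputs["Height"])
links.new(dnode.outputs["Displacement"], out.inputs["Displacement"])
```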
- Before we render, we'll also need to add an HDRI for our lighting in the world properties (lots of great free HDRIs are available on polyhaven.com):
Adding an Environment texture to the World shader |
Choose an HDRI |
- We can try a test render by enabling viewport shading in the top right corner of the 3D view:
We see that our material is working, but it's very spiky. This is because our displacement node is set way too strong. Let's decrease it a bit by tweaking the Scale value:
Displacement too strong |
Decrease the displacement scale value |
- Now we can do our final render by hitting F12 or by going to Render > Render Image.
And we're done! We've created a fully tiling photoscanned texture set.
Final material render |