
Building a Portable PBR Texture Scanner

I started this PBR texture scanner project to be able to feed a PBR engine with properly authored texture data, and because it just sounded like a fun project to work on 🙂 While talking with artists I found that many struggle to get the PBR texture maps right, since authoring those maps can be quite unintuitive. With this project my goal is to help artists acquire proper PBR content for authoring the textures, and also to help them develop better intuition about how the textures should be authored to model various surfaces.

The idea for the scanner is to have a portable DSLR camera attachment that illuminates a surface with multiple lights and takes multiple images of that surface under those different lighting conditions. These images can then be used to extract proper PBR textures for the surface.

Ready at Dawn released a paper at SIGGRAPH 2013 detailing their textile scanner, which is able to capture albedo and normal maps from a piece of textile placed in the scanner by taking 10 images of the textile illuminated from different directions. My plan is to build the same device but in a portable setup, so that I can scan surfaces anywhere and anytime. In addition I would also like to capture roughness and the other maps needed to fully model the reflectance of various materials.

First Prototype

PBR Texture Scanner - 1st Prototype

I built my first prototype in just one weekend. It's made out of black cardboard, a microcontroller and 12 super bright LEDs. The microcontroller is connected to the remote shutter release port of my camera to synchronize the LEDs with the image capturing.

The main issue with this proto was that the super bright LEDs are not very bright for this purpose, and I had to compensate for this with a longer exposure time. It also had to be connected to a PC via USB cable for power and it didn't have a DSLR camera mount, so it wasn't really portable. However, this was a simple way to get my feet wet in building this kind of attachment and to gain some preliminary experience before starting to build a more comprehensive prototype.

Second Prototype

For the 2nd prototype I wanted to use brighter LEDs and have a more portable setup with a proper mount for a DSLR camera. I also wanted to take advantage of cross polarization photography to separate the diffuse and specular reflectances. A diffuse-only image is important for calculating normal maps and diffuse albedo textures, and a specular-only image can potentially be used for extracting roughness maps for a PBR engine. I was also striving for a modular design that would enable me to easily improve components of the prototype.
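Roughly speaking, the separation itself is just a subtraction: the cross-polarized capture contains essentially only the diffuse reflection, so subtracting it from the "full lighting" capture of the same surface under the same lamp leaves the specular-only part:

I_{\text{spec}} \approx I_{\text{full}} - I_{\text{diffuse}}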

Building the Frame

For the frame of the attachment I was looking for existing solutions, and the closest match I found was a softbox, which is used for flood lighting in the movie and photography industries. These softboxes didn't fulfill all the special requirements for the attachment though (easily collapsible, certain size, minimal internal light reflections, blocking external light, etc.), so I decided to build a custom frame myself instead.

I started building the frame by making the 4 "legs" of the attachment, to which I would attach the LEDs. I sawed the legs from a lath of wood and then screwed small hinges to them, because I wanted to be able to easily collapse the frame into a smaller space. Then I painted the legs with black matte spray paint to reduce internal reflections from the lights during the capture process.

PBR Texture Scanner - Frame Building

For the camera mount I bought a metal corner brace, sawed a piece off for a better fit and drilled a few holes in it for attaching the camera to the brace and the brace to the frame. The lesson I learned from this was to do as little metal work as possible! It's a hell of a lot of work to shape metal, particularly if you don't have a bench vice and a bench to attach it to. So in hindsight it would probably have been better to look for a plastic corner brace instead.

PBR Texture Scanner - Frame Building

Another lesson I learned throughout the project was that it's not all that easy to find components for hardware projects. I spent quite a lot of time in hardware stores looking for specific components that could fit the project, and I had to adjust the design based on what kind of parts were available to me. If you have a 3D printer, it could come in handy for fabricating these hard-to-find custom pieces.

Next I sprayed the corner brace black and hot glued a piece of cork sheet to the brace to protect the camera from scratching. Next lesson: hot glue is really hot and shouldn’t be touched with fingers straight out of the gun! 😛

Then I screwed the brace to a floral craft ring which had a big enough hole for my camera lens to fit through, and sprayed the ring black as well. Finally I sprayed another floral craft ring black and screwed the 4 legs to it by their hinges to complete the attachment frame, as shown below.

PBR Texture Scanner - Frame

Making the HPL Lamps

For the 2nd proto I wanted to use brighter LEDs, and I bought 3 Watt 700mA High Power LEDs (HPLs) from eBay for the purpose. I ended up using 6000-6500K HPLs to get lighting close to CIE Standard Illuminant D65, which requires less color correction when extracting calibrated albedo textures. For cross polarization switching I also wanted to experiment with the idea of switching the light polarization at the light instead of at the camera lens, so I built these 2×2 HPL lamps.

In each HPL lamp two HPLs are connected in series to provide more light to the capturing area, and the two pairs of HPLs in a lamp can be switched on/off independently to implement the light polarization switching. There are a total of 12 of these HPL lamps in the attachment, so 48 HPLs in total. Note that I don't use heat sinks for the HPLs since they illuminate the surface only very briefly, so heating isn't an issue.

PBR Texture Scanner - Building HPL Lamps

In each lamp the positive leads of the two HPL pairs are connected together, while the negative leads have separate wires, so both HPL pairs can be controlled independently with 3 wires. The HPL pairs are hot glued on top of a 3-pin crimp connector housing with male Dupont wire terminals, which enables changing the lamps easily if needed. This was also a good exercise in learning to use a Dupont wire crimper tool.

PBR Texture Scanner - Building HPL Lamps

Wiring the Frame

The frame is wired essentially in an LED matrix configuration. Each leg in the frame has 5 wires going in: 3 source (positive) wires and 2 sink (ground) wires. Each of the 3 lamps in a leg has its own single source wire, while all the lamps in a leg share the two sink wires. Connecting one of the 3 source wires to the power supply and one of the two sink wires to ground switches on one HPL pair at a time. Across the four legs the source wires are further connected so that the wires from lamps at the same height level in the legs are joined, while the sink wires are individually connected to an HPL Matrix Driver (explained later). Thus any single HPL pair out of the 24 HPL pairs can be switched on at a time, with a total of 3 source wires + 8 sink wires connected to the driver.

The 3 wires for each lamp in the frame are connected to a crimp connector housing with female Dupont wire terminals, so that the HPL lamps can be easily plugged into the frame. These headers in the frame are also tilted vertically with small wedges so that the HPL lamps point towards the center of the capturing area when the attachment is in its designed extended configuration during the capture process. This tilting is important to minimize angular light attenuation. The 11 control wires from the legs finally end up at a female IDC header that connects to the HPL Matrix Driver, which controls the current flowing through the HPL lamps.
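To make the matrix scheme concrete, below is a minimal sketch (Arduino-style C++ for the Teensy) of how the microcontroller could select a single HPL pair through the driver. The pin numbers, helper names and active levels here are placeholders for illustration rather than the actual firmware; the real signal polarities depend on how the driver transistors are wired.

#include <Arduino.h>

// 3 source (high side) control lines, one per lamp height level in a leg.
const int SOURCE_PINS[3] = {2, 3, 4};
// 8 sink (low side) control lines, two per leg (one per HPL pair in a lamp).
const int SINK_PINS[8] = {5, 6, 7, 8, 9, 10, 11, 12};

void hplMatrixInit() {
  for (int i = 0; i < 3; ++i) { pinMode(SOURCE_PINS[i], OUTPUT); digitalWrite(SOURCE_PINS[i], LOW); }
  for (int i = 0; i < 8; ++i) { pinMode(SINK_PINS[i], OUTPUT); digitalWrite(SINK_PINS[i], LOW); }
}

// Switch on exactly one of the 24 HPL pairs: 'level' picks the source row (0-2)
// and 'sink' picks the sink column (0-7). Pass -1 for either to switch everything off.
// HIGH is used as "selected" here for simplicity; in the real driver the high side
// PNP transistors are likely switched with the opposite polarity.
void selectHplPair(int level, int sink) {
  for (int i = 0; i < 3; ++i) digitalWrite(SOURCE_PINS[i], (i == level) ? HIGH : LOW);
  for (int i = 0; i < 8; ++i) digitalWrite(SINK_PINS[i], (i == sink) ? HIGH : LOW);
}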

PBR Texture Scanner - Wired Frame

HPL Matrix Driver

Controlling 3W 700mA HPLs is a bit more complicated than controlling the super bright LEDs I used in the first prototype. The HPLs require much more current than microcontrollers can source from their pins (usually max ~20mA), so a separate module is needed to handle the current. For this purpose I built an HPL Matrix Driver, which is able to source a large current to the HPLs with a small control current from a microcontroller. Below is an image of the components I used for building this 3×8 HPL Matrix Driver.

PBR Texture Scanner - HPL Matrix Driver

The main components of the driver are 3x TIP127 PNP transistors and 8x TIP122 NPN transistors, which implement the high and low side switching of the matrix respectively and are both rated at 3A. There's also an LM350 voltage regulator paired with a 1.8Ω resistor to provide a constant 700mA current to the HPLs. I'm also using a 12V 1.5A step-up boost converter to step up the ~3.7V from the battery to handle the large voltage drop across the two HPLs connected in series. The driver has 2x 14-pin male IDC headers for connecting to the HPL matrix and to a microcontroller.
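As a quick sanity check of the constant current part (assuming the LM350 is wired in its usual current regulator configuration, i.e. the 1.8Ω resistor sits between the output and adjust pins so the regulator keeps its ~1.25V reference across it):

I = \frac{V_{\text{ref}}}{R} \approx \frac{1.25\,\text{V}}{1.8\,\Omega} \approx 0.69\,\text{A} \approx 700\,\text{mA}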

If you plan to build this kind of HPL Matrix Driver you may want to check the video below for the schematic and more details. Note that you can also easily scale the driver up for a bigger matrix, and it could be used to control a larger HPL array with time multiplexing.

The MCU Module

For the microcontroller I used a Teensy LC. It has plenty of IO pins and more than enough processing power to run the simple logic required by the attachment, and it's small enough to fit the space I have for it in the attachment. It can also be powered directly from the 3.7V Lithium Polymer battery that I'm using as the power source for the attachment. The actual MCU module is quite simple, consisting mostly of connectors to the other modules, as shown below.

PBR Texture Scanner - MCU Module

The only slightly tricky part of the MCU module was figuring out how to trigger the DSLR camera, but I found a great page listing the remote shutter pin layouts of various camera models for DIY projects. For Canon this only requires connecting the trigger wire to ground, which I implemented with a transistor.
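Below is a minimal sketch of that trigger logic. The pin number and pulse length are placeholders, and the transistor is assumed to short the camera's trigger wire to ground whenever the control pin is driven high:

#include <Arduino.h>

const int SHUTTER_PIN = 20;        // placeholder: drives the base of the shutter transistor
const int SHUTTER_PULSE_MS = 100;  // placeholder: how long the trigger line is held at ground

void shutterInit() {
  pinMode(SHUTTER_PIN, OUTPUT);
  digitalWrite(SHUTTER_PIN, LOW);  // transistor off, trigger line released
}

// Pulling the trigger wire to ground is all the Canon remote shutter input needs.
void triggerShutter() {
  digitalWrite(SHUTTER_PIN, HIGH);
  delay(SHUTTER_PULSE_MS);
  digitalWrite(SHUTTER_PIN, LOW);
}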

Control Unit

The control unit of the attachment consists of the MCU module, the HPL Matrix Driver, a 2Ah 3.7V Lithium Polymer battery and a USB charger (the red module in the image below). On the control panel I have the capture trigger button which starts the capturing process, a remote shutter cable connector for the DSLR camera, an on/off switch and an RGB indicator LED. The frame and control unit are then attached together with 3 screws, with aluminium tubes encapsulating the screws to keep the rings at a fixed distance from each other.
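Putting the pieces together, the sequence started by the capture trigger button could look roughly like the sketch below. It reuses the hypothetical helpers from the earlier snippets, assumes one exposure per HPL pair (i.e. per polarization state at the lamp site) and uses made-up timings; with the FPM at the camera lens the inner loop would toggle the modulator instead of the second HPL pair.

const int SETTLE_MS   = 50;   // placeholder: let the HPLs reach full brightness
const int EXPOSURE_MS = 500;  // placeholder: longer than the camera exposure time

void runCaptureSequence() {
  for (int lamp = 0; lamp < 12; ++lamp) {    // 12 HPL lamps in the frame
    for (int pair = 0; pair < 2; ++pair) {   // 2 polarization states per lamp
      int level = lamp % 3;                  // source row: lamp height in its leg
      int sink  = (lamp / 3) * 2 + pair;     // sink column: leg * 2 + HPL pair
      selectHplPair(level, sink);
      delay(SETTLE_MS);
      triggerShutter();
      delay(EXPOSURE_MS);
      selectHplPair(-1, -1);                 // everything off between shots
    }
  }
}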

PBR Texture Scanner - Control Unit

Below is a video update of the project that gives a closer look at the control unit, and also gives some more details about the project.

Making Light Filters

For cross polarization photography, and to be able to switch the light polarization at the lamp site, I created light filters that I attached in front of the HPL lamps. These light filters have a layer of black foam into which I cut slots for both HPL pairs. Then I cut a piece of glass with a glass cutter, glued it on top of the foam, and attached vertical and horizontal linear polarizing film in front of the slots.

The idea here is to force all light from an HPL pair to pass through the linear polarizing filter in front of its slot, so that the light polarization can be switched by switching on one HPL pair or the other. Below is an image of the glass cutting equipment I used and the finished filters hot glued in front of the HPL lamps.

PBR Texture Scanner - Glass Cutting Equipment for HPL Lamp Filters PBR Texture Scanner - HPL Lamp Filter

While there doesn’t appear to be any difference in the slots in the above image these slot let through differently polarized light. This can be demonstrated by placing another polarizing filter on top of the lamps. In the below image the filter on top is rotated by 90 degrees which “closes” either of the two slots in a lamp depending on the filter orientation.

PBR Texture Scanner - HPL Lamp Filter

Frame Skirt

To block external light from reaching the capturing area and to achieve a well defined lighting environment, my wife helped me by sewing a skirt around the frame. I settled on black velvet for the skirt to effectively block external light while minimizing internal reflections from the HPL lamps during the capture process. The skirt also defines the extents of the attachment legs so that the HPL lamps are in known positions when the attachment is fully extended. These HPL lamp positions are fed to the algorithm for extracting normal maps from the images.

PBR Texture Scanner - Cover

A lesson I learned with black velvet is that it sheds some lint, so after cutting the pieces it's good to finish the edges properly to keep the lint from contaminating the captured surfaces. As an afterthought I used glue to finish the edges, while a serger would have been a better choice, but since the skirt was already stapled to the frame I couldn't use one. Duvetyne, which was my other candidate for the fabric, might not have this lint issue though.

Because the legs of the attachment are connected via hinges to the control unit, the attachment can be collapsed into a smaller space, making it easier to carry around, as shown below.

PBR Texture Scanner - Collapsed

Fast Polarization Modulator

While the polarization switching at the light site kind of works, it's not an ideal solution and has some issues. I liked the idea of light-site switching since it's quite a cheap solution, but it's really better to do the switching at the camera lens instead. If you are interested in the details of why, you can check the video at the end of this post in the "Results" section, which also dives deeper into light polarization with some experiments.

Switching the polarization at the camera lens can be done with a component called a "Fast Polarization Modulator" (FPM), which switches the light polarization by 90 degrees when a voltage is applied to the modulator. I received a 5x5cm FPM from a company called LC-Tec and hot glued it to a step-up filter ring in order to be able to mount it in front of the camera lens.

PBR Texture Scanner - FPM

The FPM consists of a twisted nematic liquid crystal (LC) layer sandwiched between glass plates, with a polarizing filter on one side of the FPM. The LC layer twists the light polarization by 90 degrees in its chiral nematic rest state. When a voltage is applied to the layer it switches to the nematic phase, where light goes through the layer without a polarization change. After the LC layer, light passes through the polarizing filter to implement the polarization switching behavior.

It’s important to orient FPM correctly at the camera lens in relation to the linear polarizing filters at HPL lamps for cross polarization. Also it’s preferable to have light horizontally polarized in relation to the surface at the lamp to minimize additional partial light polarization at air-material interface. Because the HPL lamps are placed at the attachment legs which are at 90 degree angles from each others, the FPM can handle cross polarization from all HPL lamps oriented horizontally in relation to the surface.

FPM Controller

Controlling the FPM isn't quite as trivial as you might think, since it should be driven with alternating current (AC) instead of direct current (DC) to avoid ion migration in the LC layer. So I built an FPM controller module using a MAX3232 RS-232 transceiver and added it to the control unit. The controller is fed with a 200Hz PWM signal from the MCU to generate a ~5.5V 200Hz AC signal. When the AC signal is applied to the FPM it switches to the nematic state, and when the signal goes back to 0V the layer returns to the chiral nematic phase.
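For reference, generating the 200Hz drive on the Teensy side is just a PWM output; Teensyduino's analogWriteFrequency() sets the PWM carrier frequency. The pin and the way the state is toggled below are placeholders rather than the actual firmware:

#include <Arduino.h>

const int FPM_PWM_PIN = 16;  // placeholder: PWM-capable pin feeding the MAX3232-based FPM controller

void fpmInit() {
  pinMode(FPM_PWM_PIN, OUTPUT);
  analogWriteFrequency(FPM_PWM_PIN, 200);  // 200Hz PWM carrier (Teensy-specific call)
  analogWrite(FPM_PWM_PIN, 0);             // no drive -> FPM stays in its chiral nematic rest state
}

// true:  ~50% duty 200Hz square wave -> AC drive -> nematic state (no 90 degree twist)
// false: drive off -> rest state (polarization twisted by 90 degrees)
void setFpmActive(bool active) {
  analogWrite(FPM_PWM_PIN, active ? 128 : 0);
}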

PBR Texture Scanner - FPM Controller

Results

To test the scanner I went on a field trip around the University of Toronto and scanned various horizontal and vertical surfaces. Based on the preliminary results I was quite positively surprised by how well the scanner worked, though I still need to write the software to extract the actual PBR textures from the images. Once I have the software working I'll write another blog post about it. Below is a video update of the project, including a few clips towards the end of the video showing how I used the scanner on the field trip.

Below are some examples of the surface captures from that field trip. For vertical surfaces, where I had to hold the scanner during the capture process, I had to do some positional alignment of the images for specular extraction due to slight movement of the camera. There's also a brownish tint in the specular images that I'm not sure about, which is something I need to look into. I also noticed that for cross polarization the FPM lets through bright specular light with some purple tint, which I believe is due to the AR coating in the FPM and which requires some further investigation.
UPDATE #1: I talked to LC-Tec and they explained that the purple tint is caused by a slightly different polarization twist for different wavelengths, which is characteristic of the FPM chiral phase (there's no tint in the nematic phase). The twist is exactly 90 degrees for the 550nm (green) wavelength, while longer (red) and shorter (blue) wavelengths have a slightly different twist, causing those wavelengths to leak through, hence the purple tint.
UPDATE #2: I added Canon CR2 raw files (5472 x 3648px) of each of the captures for download. These are rather large files, so they are split into separate diffuse and full lighting capture downloads (12 images in total, ~250MB each).

PBR Texture Scan - Grass
CR2: [diffuse] [full]
PBR Texture Scan - Grass
CR2: [diffuse] [full]
PBR Texture Scan - Brick
CR2: [diffuse] [full]
PBR Texture Scan - Asphalt
CR2: [diffuse] [full]
PBR Texture Scan - Wood Chip
CR2: [diffuse] [full]
PBR Texture Scan - Tree Bark
CR2: [diffuse] [full]
PBR Texture Scan - Soil
CR2: [diffuse] [full]
PBR Texture Scan - Paver Tile
CR2: [diffuse] [full]
PBR Texture Scan - Paver Brick
CR2: [diffuse] [full]

To account for light attenuation, a diffuse-only capture of a flat white surface is needed for calibration. For each pixel these calibration images encode the function:

a\,(\mathbf{\hat{n}} \cdot \mathbf{\hat{l}})

\begin{aligned}
\mathbf{\hat{n}} &= [0, 0, 1]\\
\mathbf{\hat{l}} &= \text{light direction}\\
a &= \text{light attenuation}
\end{aligned}

I used white matte cardboard as the calibration material, and these calibration textures should be smoothed a bit to remove minor irregularities in the surface. Since the angular and distance attenuation is low-frequency, the smoothing shouldn't cause issues. Also, because the attenuation is measured for a flat surface while some scanned surfaces have height differences, these calibration textures should really be applied in 3D, but that's something I still need to figure out exactly how to do.
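To sketch how the calibration fits into the extraction (this is the standard photometric stereo setup the RaD paper builds on, written in the notation above; my actual software differs in the details): for lamp k the calibration capture gives c_k and the diffuse capture of the scanned surface gives I_k, and after dividing out the attenuation the albedo ρ and normal n̂ can be solved per pixel in the least-squares sense over all 12 lamps:

\begin{aligned}
c_k &= a_k\,(\mathbf{\hat{n}}_0 \cdot \mathbf{\hat{l}}_k), \qquad \mathbf{\hat{n}}_0 = [0, 0, 1]\\
I_k &= a_k\,\rho\,(\mathbf{\hat{n}} \cdot \mathbf{\hat{l}}_k)\\
b_k &= \frac{I_k}{c_k}\,(\mathbf{\hat{n}}_0 \cdot \mathbf{\hat{l}}_k) = \rho\,(\mathbf{\hat{n}} \cdot \mathbf{\hat{l}}_k)\\
\mathbf{g} &= \operatorname*{arg\,min}_{\mathbf{g}} \sum_{k=1}^{12} \bigl(\mathbf{\hat{l}}_k^{\mathsf{T}}\mathbf{g} - b_k\bigr)^2, \qquad \rho = \lVert\mathbf{g}\rVert, \qquad \mathbf{\hat{n}} = \mathbf{g} / \rho
\end{aligned}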

PBR Texture Scanner Calibration Image
[download PNG package]


In these calibration images you may have noticed some faint brighter spots (e.g. in the upper-right corner of the 45°/225° image), which are actually reflections from the other HPL lamp filters. This was an unexpected artifact that I may need to address by applying an AR coating to the HPL lamp filters.

Tools

For building the second prototype I used quite a few different tools. Below is a composite image of most of the tools, along with their names and their purpose in this project. I already had a few of the tools, but I had to buy and learn to use many more, making the tools probably the biggest investment in the project.

PBR Texture Scanner - Tools

  1. Soldering Station (Weller WLC100) + lead-free solder. Just for your everyday soldering needs. I also bought a smaller tip for the soldering iron to be able to do a more precise job.
  2. Welding magnifying glass with LED lamp (MG16129-C). This kind of magnifying glass is crucial for getting small things soldered properly and having an LED lamp is important to provide enough light for soldering.
  3. Digital Oscilloscope (50MHz Siglent SDS1052DL) + Probe. For this project I used the oscilloscope only to validate output signal from the FPM driver.
  4. Digital Multimeter. Just for basic measurement of resistance, volts, amps and capacitance. Just be careful when measuring amps that you connect the probe to a plug on the meter that can handle the current in your measurement range. I managed to blow a fuse in my multimeter because of this.
  5. Dupont Wire Crimpers. For creating connectors for the HPL lamps and between modules in the controller unit. You can try to use pliers instead, but it's a lot easier with proper crimpers and the result is much better.
  6. Breadboard. For testing your electronics designs with jump wires before soldering anything. I also used an iPad app called iCircuit (available also for Android/Windows phones & tablets) to test some designs.
  7. Curved Tweezers. Extremely valuable for placing and holding wires for soldering.
  8. Wire Strippers. Just your regular tool for stripping wires.
  9. Side Cutters. Very useful for cutting wires since it has blades right up to the tip.
  10. Long-nose Pliers. Good generic pliers for random jobs.
  11. Pipe Wrench Pliers. Just for holding nuts while screwing in bolts.
  12. Sheet Metal Cutters (straight). I needed these cutters just for making a "cookie cutter" tool from sheet metal to punch holes of the correct size into the black foam for the light filters. I tried to use a hobby knife first, but it didn't quite work out well enough.
  13. Staple Gun. For wiring the frame and attaching the skirt to the frame.
  14. Hobby Knife. For precision cutting of various things.
  15. Utility Knife. For cutting pieces from prototyping board for the different modules. For future prototypes I would like to try homemade printed circuit boards (PCBs) instead, to avoid soldering wires on prototyping boards, which is somewhat unreliable and error prone.
  16. Scratch Awl. Useful for marking points prior to drilling so that the drill bit doesn't wander. I also used it for punching holes through wires for wire branching in the frame.
  17. Screwdriver. Just your regular screwdriver.
  18. Glass Cutter. To make a score on a sheet of glass for breaking. I used this to make glass squares of the proper size for the light filters.
  19. Running Pliers. To break the glass along the score. You could do this simply by snapping the glass by hand after scoring it, but I just wanted to play it safe.

Conclusion and Future Work

So far this project has been very educational for me and has the potential of becoming something useful for authoring PBR textures. The major missing part is the software to extract the PBR textures from the captured images, and I'm sure there's quite a bit of work to do to get good results out of the images. There are also various improvements to make on the hardware side (e.g. improvements to the HPL lamps) and different things I would like to try out.

For example, to bring down the cost of building this project, I would like to try replacing the FPM with a lens from 3D shutter glasses. I actually already got a pair of glasses for about $10 that I ripped apart to make an alternative to the FPM I have, but I have to see how well it compares. I also received a beamsplitter and a first surface mirror that I'll be testing for capturing differently polarized images.

If you have ideas/comments/questions about this project I would like to hear them in the comments below!


Comments (43)
  • Patapom

    December 29th, 2015 at 3:09 pm

    Nice project! Good job!
    I tried a similar thing but it was very cheap in comparison: everything took place in a dark giant cardboard box but my goal was only to extract some approximate diffuse albedo values that are still quite lacking for the realtime graphics community. I've been collecting about 200 general materials now and that's quite okay for a simple database for coders and artists alike…

  • Daniel Morrison

    December 30th, 2015 at 12:16 pm

    Looks like it is a pain to open those legs and difficult in vertical orientations. I’d like to see some sort of spring loading to the legs so you don’t have to pull them apart manually. Perhaps something like the way a Rifa-lite opens up.

    I’m really looking forward to the software processing side of things.

    Thanks a ton for doing all of this, it is invaluable for the rest of us! 🙂

    • December 30th, 2015 at 6:29 pm

      Yeah, it’s quite tricky particularly for those vertical surfaces and I have been thinking of having some mechanism to open the attachment more easily. Something that would push the legs open and you could collapse them back by pushing the legs together for transportation. If there are ideas I would love to hear them 🙂

    • Nick Otto

      January 6th, 2016 at 5:51 pm

      a jig that the legs slot into would be beneficial for any orientation. It will guarantee your legs are in the same position (or close to) every time.

  • Thomas Mansenal

    December 31st, 2015 at 2:35 am

    Congratulations Jarkkol quite amazing DIY project, it could have a page on Hackaday!
    I’m eager to see how the processing phase goes. I had a few questions (colour centric :)):

    – How will you proceed for the photo processing and colour calibration?
    – Have you shot any colour rendition chart (ColorChecker Passport or ColorChecker Classic)?
    – LEDs usually have quite a narrow-band spectrum which, combined with component quality (phosphor), may yield a very low colour rendition quality:
    it would be interesting to see how your batch affects the colour rendition chart.
    – Related to the previous point I see that some of your white calibration frames exhibit a greenish tint, any clue where it is coming from?

    Keep up the good job!

    Cheers,

    • December 31st, 2015 at 3:12 pm

      Hey Thomas and thanks for the great questions (:

      Not quite sure what introduces the green tint, but I’m suspecting the light filters. I’ll make another batch of the HPL lamps and will try to improve this as well. Though there’s also some specular color tint caused by the FPM.

      I’m planning to perform the color calibration with the passport, but haven’t done it yet. I should be able to divide the passport images with the white calibration images to get the image for the color calibration. I’m hoping colors are pretty calibrated already but have to see if the LED spectrum has significant impact on the quality.

  • Pingback: Building Your Own Portable PBR Texture Scanner - Lesterbanks

  • Geordie

    January 4th, 2016 at 2:35 pm

    Great work Jarkkol! I’ve been working on a very similar project only based on an XY plotter type assembly to do more of a full 3D scan of an area along with PBR textures. Would you mind giving the link to the LEDs you purchased on ebay? Thanks!

  • Geordie

    January 4th, 2016 at 2:40 pm

    PS I’ve been using these system for prototyping:
    http://www.makerbeam.com/
    http://openbuildspartstore.com/

  • Axel

    January 8th, 2016 at 10:39 am

    Hey JarkkoL,

    I would like to know, did you get Hands on the Polarizing Filter?
    Because I would like to rebuild your system so that I may contribute a software to your Project 😉 Or at least I'll try to write software for texture/ normal bump map calculation for this system. But actually I can't find anything like your polarizing filter yet 🙁
    Maybe we can collaborate on the software then, when i’m done building my system.

    Cya

    • January 8th, 2016 at 2:53 pm

      Hi Axel!

      For polarization switching I’m currently using FPM, but I’m also looking into other options.

      I was actually thinking of releasing some raw surface captures from my field trip so that people can try to develop the software side without having to build the hardware. These raw captures are just ~500MB per surface so I need to think the best way to host them. I have also already written some software to extract normal & diffuse albedo textures which looks promising and I’m thinking of putting the project on GitHub or something so that people can make further improvements to it.

      • Stephane-lb

        January 14th, 2016 at 6:23 am

        Hi JarkkoL,

        First of all, thank you so much for sharing your project, it is really cool and inspiring! I love the way you built it. It would be perfect if you would release some of the surfaces somewhere (Google Drive might be a good choice?) and part of the code on GitHub if you want to.

        May the force be with you,

        Stephane

  • navin verma

    April 25th, 2016 at 9:50 am

    Then how do you process these images to get the diffuse, specular, normal, translucency etc. maps?

    There is an application named CHI's RTI Builder and Viewer, but there is no option for saving maps as 16-bit float values. May I please know how and with which application you process these images further?

    Waiting for a reply…

    • April 27th, 2016 at 10:55 pm

      I wrote my own software to extract the textures from the images. Albedo and normal maps are quite trivial to extract with the algorithm in the paper by RaD, but I haven’t yet found a good algorithm for roughness maps.

  • Zsolt

    June 15th, 2016 at 7:56 am

    hello, do you have the BOM for this project?
    thanks.
    -zs.

    • July 19th, 2016 at 9:08 pm

      Sorry no, this is just a prototype. But if I get around to implementing a proper scanner I'll provide one.

  • George R

    July 19th, 2016 at 4:43 am

    I’m thinking about replicating your setup – have you thought any further about releasing the software? Also, did you contact LC-Tec directly for the FPM? To calculate the roughness, you could use the generated normal maps, with simulated lights to (somehow) reverse engineer the broadness of the highlight to a roughness value.

    • July 19th, 2016 at 9:23 pm

      I haven’t thought of releasing the software because it’s quite unfinished and I should make 3rd prototype that’s able to deal with specular reflections better. Currently the FPM leakage is quite a problem for other than rough surfaces and the light setup doesn’t seem to have enough coverage to robustly calculate roughness (highly reflective surfaces are particularly an issue).

      For roughness I did pretty much what you proposed, i.e. evaluated a specular BRDF with different roughness values using the extracted normal and picked the roughness value that best matched the captured specular reflection. I did this for all the lights and picked the median. But there's probably a better algorithm to calculate it.

  • Chris B

    August 16th, 2016 at 9:04 am

    Hi Jarkko,
    I did some earlier researches on that with only 4 images. Script is still available; only as batch so far. Right now I’m working on it again to a newer version. just started, but your input has been mind opening. Thank you for sharing your passion. Keep us updated; maybe we can help each other. Best, Chris

    • GeorgeR

      August 20th, 2016 at 6:19 am

      Where might that script be Chris? The linear algebra stuff is way over my head but if I saw some code I might get it (and then make it gpu accelerated and give it a ui!)

  • Chris B

    August 25th, 2016 at 1:11 pm

    Hi GeorgeR,
    I think I got something wrong… I was talking about Normal Map generation from multiple images…. But I think you guys are talking about “roughness” map as image, defining the real roughness amount per pixel input. As far I understand roughness map; it is set via value between 0-1; depending on each material.
    Send me your mail address; I won’t make any code public at this point.

  • Chris B

    August 26th, 2016 at 6:01 am

    If anyone else is interested; I created my script based on this workflow (step 3+4)
    http://zarria.net/nrmphoto/nrmphoto.html
    Thanks to Richard for sharing this info with us.

    @GeorgeR – what kind of software do you use for converting the images, IM?

    • Chris B

      August 26th, 2016 at 6:07 am

      … Richard = Ryan Clark
      Got the wrong name.

  • Chris B

    August 26th, 2016 at 6:06 am

    So, as far I can tell….
    – Albedo
    – Specular
    – Normal
    .. so far these are not problematic.

    1. How to convert Normal to Displacement “average” (KNALD does the best job for now) But how to re-code this with IM for example.
    2. Getting the correct value for each material

    Anyone some news to share about these processes?

    • August 27th, 2016 at 11:24 pm

      Capturing specular is tricky though, particularly for shiny surfaces. If you have only a few point lights like I do (12), then for shiny surfaces a pixel may appear completely black for all the light sources. This is also what Paul Debevec talks about in this recent video about Light Stage: https://www.youtube.com/watch?v=c6QJT5CXl3o

      Interestingly Paul also talks about cross polarization and how the polarization switching is done at the light instead of at the camera, so I was somewhat on the right track with my first polarization switching idea (: I think this works for him though because the lights are far enough from the captured subject that there are no notable artifacts.

  • Chris

    September 5th, 2016 at 10:32 pm

    Thanks JarkkoL – great video with a lot of details
    I’m also on a hunt; regarding roughness measurement:
    http://archives.sensorsmag.com/articles/0499/0499_58/

    • September 6th, 2016 at 9:02 am

      I believe if you could capture non-zero specular illumination from multiple light sources for a known normal, then you could calculate roughness for a given BRDF. Problem is, how would you do this for all pixels in the image.

      There’s a paper “Practical SVBRDF Capture In The Frequency Domain” from SIGGRAPH 2013 about using LCD panel and DSLR camera for SVBRDF capture, but it would be difficult to build this in a portable setup: https://mediatech.aalto.fi/publications/graphics/FourierSVBRDF/aittala2013_siggraph_lowres.pdf

  • Chris

    September 6th, 2016 at 12:19 pm

    Nice – didn’t know about the “two shot” option.
    I’ve found a newer article on this:
    https://mediatech.aalto.fi/publications/graphics/TwoShotSVBRDF/aittala2015_siggraph.pdf

    I’ve found something else to measure the roughness: http://www.ismap.com/manufacturers/z187a5ac/iz2120b04-surface_roughness_tester.html

    But it would be great getting it straight from the images. @JarkkoL: Can you explain your idea a bit more in detail? Do you want to calculate the Albedo (without spec) against the Normal (multiple lights)?

    • September 8th, 2016 at 10:10 pm

      The two-shot solution is restricted to texture-like captures that exhibit self-similarity as said in the abstract, so it’s not really a generic solution. I haven’t read it through because I want a generic solution, but there might be some interesting ideas for the generic approach nevertheless.

  • Chris

    September 7th, 2016 at 10:54 pm

    Would you mind sharing the test images you shot in raw format (12 pics)?
    I did some scripting overnight; also getting out the normal + albedo (incl. specular), so far only for one set of 4 images. …waiting for the LEDs to arrive.

    Did you have some time to test polarized images any further? I can't wait to start shooting with the "new" rig.

    How did you “combine” the 12 different images to one normal? I’m thinking of multiple options which might create different results.

    • September 12th, 2016 at 2:28 pm

      I added links under each capture image to download the Canon CR2 raw files. These are pretty big files (~250MB each). There’s also link to download the calibration images.

  • ChenA

    September 24th, 2016 at 3:02 am

    Can you share the HPL Matrix Driver’s circuit diagram?
    Thanks.

  • andy

    September 29th, 2016 at 8:58 pm

    Hey Jarkkol,

    Really cool video and thanks for sharing the process. Could you share the process of deriving the Specular and the Normal map please?

    Are you taking all your images and subtracting the Polarized image from the Diffuse (Albedo) image and then taking the entire set and averaging them out?

    Thanks again!

    Andy

    • October 2nd, 2016 at 12:50 pm

      Thanks Andy!

      To get the specular-only reflectance of a given image, I simply subtract the diffuse image from the "full lighting" image. However, this image isn't something you could just throw into a PBR renderer; you have to use all these specular-only images to extract a roughness map, and I don't have a good method for that yet.

      For albedo and normal I’m using the technique described in the RaD paper with some modifications. Getting proper diffuse albedo is more complex than just averaging all the diffuse images.

  • Dmitry

    February 12th, 2017 at 7:58 am

    Nice! I made a similar one over at Visceral Games just a while back, but its design was more driven by simulation and software. I posted it on my Facebook and haven't had any time to write anything yet. Though I was going back and forth between software and hardware while working on it.

    Original post: https://www.facebook.com/andcoder/posts/10211157429150608?pnref=story
    And a few comments and sample output data in the comment section there:
    https://www.facebook.com/photo.php?fbid=10211157398429840&set=pcb.10211157429150608&type=3&theater

    It was designed specifically to capture surface micro details with as little custom hardware as possible so I only drive 1 W LEDs with 170 mA current, 13 clusters, two degrees of polarization.

    Software solver dictated a few design choices there that is to place lights such that they cast as many micro shadows as possible. Classical photometric technique relies on capturing just normals but that’s not enough to reconstruct and render surface accurately. So micro shadows play an important role there as you can derive a lot more detail about the surfaces.

    Also, for modern PBR you need full BRDF fitting with Reflectance/Roughness/Normals, fitting something like the GGX BRDF, so the classical analytical approach with a simple N.L doesn't work. Originally I did a full search over the BRDF parameter space on the GPU but recently, with some minor modifications to the BRDFs, I started using gradient descent to get the same output much more quickly without doing a full search.

    That’s the reason I have a lot more directions at grazing angles – 8, 4 at 45 degrees and only one from the top. In practice though, very often we won’t even use the top direction. If you have very glossy surface you want to do off-specular-peak fitting and direct specular reflection doesn’t help there.

    Also, to make the solver more robust, it's much easier to assume directional lighting only, or a point light that's far from the surface. That makes the footprint of the scanner larger but it's much easier to deal with, or else you get non-uniform sampling across the frame and end up with very biased data – no good.

    So I am not sure you will get any useful data out of your arrangement, as it seems that 8 out of 12 light directions are kinda wasteful and they are not far enough from the scanning patch to make the directional lighting assumption.

    But I like the idea of an umbrella-like rig, though it would need to be at least 2-3x larger.

  • JamesDu

    February 28th, 2017 at 3:19 am

    Hi JarkkoL,

    I want to know: is this the polarizer filter you used at the LED stage?
    http://polarization.com/polarshop/product_info.php?cPath=21&products_id=28

    Thankyou!

  • mo

    February 28th, 2017 at 2:34 pm

    hi ,
    can you provide the FPM Controller parts or how to make it ?
    great work and thank you for this post
