
PaintMixing

A numerical optimization tool for paint mixing, based on optical principles

This is an open-source project exploring the scientific principles behind paint mixing. It provides a virtual paint mixing tool that combines optical theory with numerical optimization to generate mixing recipes for a target color. By bringing together computer graphics and miniature painting knowledge, the project offers a digital experimentation platform that helps deepen the understanding of color mixing, and gives artists and hobbyists a practical tool for building color-mixing intuition and skill.

On Light, Colors, Mixing Paints, and Numerical Optimization.

This is a short write-up that is supposed to serve as a rough description of what's going on in the paint mixing tool in this depot.

The tool is a virtual paint mixer and a solver that can generate recipes for creating a particular color out of existing paints. It comes with data for Kimera paints that I measured. It's a Python 3 program and comes with all the sources, so if you have a Python distribution, you can just run it. There's also a Windows executable created with PyInstaller (see 'Releases', on the right). I can probably create a macOS version too, if need be (edit: I actually added one; there's a .dmg file, it does have something in it, and if you double-click it, it does show up, so it seems to work - but honestly, I barely use Mac, so it's hard for me to say whether this is the right way to package it, or if something else would be more expected...).

If you just want to grab the tool and play with it, that's about it! Have fun, and I hope you find it at least somewhat useful.

But below, you'll find a more or less complete description of how it works (and when it doesn't). So, if you have a bit of time to spare, read on!

Introduction

Very recently, I discovered miniature painting. I was never really into WH40K or anything related, but I have some fond memories of playing pen & paper RPGs years ago, and after watching a bunch of YouTube videos, I thought it looked easy enough to try. I still suck at it, but I somehow really enjoy the tranquilizing experience of putting thin layers of paint onto 3 cm tall figurines.

In my day job, I do real-time graphics engineering for video games, and it quickly turned out that a lot of the problems I deal with at work are very similar to those in miniature painting: you analyze how light behaves, how it interacts with different surfaces, how the eyes perceive it, etc. Of course, painting is not just engineering; it's Art after all (capital 'A'), but there seems to be a consensus that painters should understand these technical aspects, even just to know when they deliberately break them.

There were a number of things that looked like fascinating research projects somewhere between miniatures and computer graphics, but one thing that sparked my particular interest was paints. Miniature paints usually come with these cryptic names: Skrag Brown, Tombstone Horror, or whatever. I don't really mind, but the producers never actually tell you what these colors actually are. And when you have limited experience, it's often hard to tell if a particular paint will work as some midtone or if it will be too dark. Many YouTube tutorial videos actually tell you which exact paints they use, but they most often come from different lines, some are immediately available, some are not, and for some, you need to wait - and I want to paint this very second! It seemed pretty clear that instead of buying all the possible paints, the more reasonable approach would be to pick some base paints and learn to mix them to get the colors that I need.

For a beginner, there are, however, two problems. First: mixing paint is not a particularly intuitive process: sometimes you get something reasonable, sometimes you get muddy brown. Second: you need to know what color you actually want to get. Sure, there are some nice videos on how to color match, but if you don't have a good intuition of what skin tone you want to achieve, it's hard to tell if your mix needs more blue or red.

Because of my engineering background, the solution seemed obvious: I would like to just pick a color on the screen (from, say, a photograph) and I want to know which paints, and how much of them, to mix to get it. I would also like to experiment with mixing paints without actually having to waste physical paint. For that, I need to somehow characterize the paints that I have, I need a model for simulating how they mix, and I need a numerical solver that will be able to minimize the error between the color that I want and a mix of some number of paints. These sorts of processes are something that I regularly go through and enjoy, so it looked like a perfect on-the-side project.
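To make that last part a bit more concrete, here's a minimal sketch of the kind of optimization I mean - not the actual code of the tool, just an illustration. The mix_and_predict_color function is a hypothetical stand-in for the mixing model described later, and the simple squared-RGB error metric is only for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def solve_recipe(target_rgb, paints, mix_and_predict_color):
    """Find mixing ratios that approximate target_rgb.

    paints                - list of paint descriptions (e.g. measured data per paint)
    mix_and_predict_color - placeholder for the mixing model described later:
                            takes (paints, weights) and returns a predicted RGB color
    """
    n = len(paints)

    def error(weights):
        # Normalize so the ratios sum to one, then compare predicted and target color.
        w = weights / (np.sum(weights) + 1e-9)
        predicted = mix_and_predict_color(paints, w)
        return np.sum((predicted - np.asarray(target_rgb)) ** 2)

    # Start from an even mix; keep every weight between 0 and 1.
    result = minimize(error, x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n)
    return result.x / np.sum(result.x)  # mixing ratios, e.g. [0.88, 0.03, 0.02, ...]
```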

Disclaimer here: yes, I know that in practice no one works that way. Especially if the solver gives you ratios like 88 parts of white, 3 parts of blue, and 2 parts of yellow - there's no way to mix something like this on a wet palette where you work with a minuscule amount of paint. But, at least to me, it's still useful to know that it's mostly white, with a touch of blue and yellow, so when I mix something on the palette, I'm not doing it completely blind. And yes, if you've been painting for some time, you learn these things, you get that intuition. But you need to get it somehow. Painting takes a lot of practice, so if I can do some experiments purely digitally, I'm totally up for it. And to be honest, it's all more of a cool side project rather than anything else.

Just in case anyone else finds it useful, I thought I'd write up the theoretical basis for the simple tool I developed for this and provide it together with simple Python code. Since I happened to have a bit of free time (I got COVID), and I had just received my set of Kimera paints (which are single-pigment, incredibly saturated colors, amazing for mixing), I spent a week on this, and you can read about the results here. As it might be read by people with a less technical background, I'm trying to keep it all pretty simple and self-contained, so all the information you need to understand it is here. I'm not sure how well that worked out in the end, but if something is unclear, feel free to ping me and ask for details. None of it is rocket science; it's mostly high-school level math and physics (but if you're allergic, a warning: there is some math in there).

So if you're curious about how it all works, details below.

Light

Light is an electromagnetic wave, oscillations of electrical and magnetic fields propagating in space. Human eyes are sensitive to wavelengths between roughly 400 and 700 nanometers, which we perceive as colors, from violet, through blue, green, yellow, orange, to red.

Light that usually reaches our eyes is a mixture of many different wavelengths. Depending on the ratios between the amounts of particular wavelengths, we perceive the light as different colors. If it consists mostly of the shorter visible wavelengths, we'll see it as blue. If it's mostly longer wavelengths, it's going to be red. The more precise details are further down, but that's the general intuition.

To reason about these characteristics in a more principled way, one useful tool is a so-called spectral power distribution (SPD for short). It's a function that, roughly speaking, describes how much of a particular wavelength is present in some radiation. It is usually plotted as a graph, with wavelength on the horizontal axis and some energy-related quantity on the vertical axis (so the stronger a particular wavelength is, the higher the plot).
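In code, an SPD can be represented simply as a set of samples over the visible range. Here's a minimal sketch; the 10 nm sampling step and the made-up "generally blue" shape are my own choices for illustration, not anything prescribed:

```python
import numpy as np

# Sample the visible range from 400 to 700 nm in 10 nm steps.
wavelengths = np.arange(400, 701, 10)

# An SPD is then just one power-related value per sampled wavelength.
# Here: a made-up "generally blue" light, strongest around 450 nm.
blue_ish_spd = np.exp(-0.5 * ((wavelengths - 450.0) / 40.0) ** 2)
```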

So the "generally blue" light might have an SPD like this:

and the "generally red" light might have it more like this:

One particularly interesting family of SPDs are those of different light sources. You can take any light source and measure how much of the light it produces comes from particular wavelengths. There's this thing in physics called black body radiation that describes the SPD of a perfectly black body (one that doesn't reflect any light, only emits it) heated to a particular temperature. (All of that actually led straight to quantum mechanics and the world we know today: analyzing the spectra of starlight led to the understanding that distant stars produce energy just like the sun, the lines appearing in the spectra of excited gases were another catalyst in the development of quantum mechanics, and the shift in the spectra of light from different galaxies led to the discovery that the universe is expanding; it's all in the spectrum.) If you've ever come across these "2700K light", "5000K light" markings, they are exactly that - they describe the light color as the color of a black body radiator of a given temperature, in Kelvin.
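The black-body SPD mentioned above is given by Planck's law. A small sketch of evaluating it (up to a constant scale factor, which is all that matters for color; the function name and normalization are my own):

```python
import numpy as np

# Physical constants (SI units).
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def blackbody_spd(wavelengths_nm, temperature_k):
    """Relative spectral power of a black body at a given temperature (Planck's law)."""
    lam = wavelengths_nm * 1e-9  # nanometers -> meters
    radiance = (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K_B * temperature_k))
    return radiance / radiance.max()  # normalize; only the shape matters for color

# The "2700K light" vs "5000K light" markings from the text:
wavelengths = np.arange(400, 701, 10)
warm = blackbody_spd(wavelengths, 2700.0)
cool = blackbody_spd(wavelengths, 5000.0)
```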

The SPD of a typical ~2800K incandescent light looks like this:

and a ~4200K fluorescent light looks like this:

The SPD of what we consider sunlight is a fairly complex distribution that includes not only the actual SPD of the light generated by the sun but also the absorption and scattering of some of it occurring when the light passes through the atmosphere. Because the sunlight encounters different amounts of atmosphere on its way at different times of the day (less at noon, more in the evening), the sunlight SPD also depends on the time of day and atmospheric conditions. On a typical sunny day, it might look like this:

Because we're doing science here, everything needs to be standardized, measured, and quantified. For that reason, the CIE (International Commission on Illumination) introduced "standard illuminants" - a number of SPDs describing some very particular lights. Illuminant A represents a typical tungsten filament light bulb, a black-body radiator at 2856K. Illuminants B and C have become pretty much obsolete in favor of the D series of illuminants. There's a whole family of these; they describe daylight in different conditions - from warmer ones (D50, D55) to colder ones (D65, D75). The numbers 50/55/65/75 roughly correspond to the temperature of a black body that would emit light of a similar color (5000K, 5500K, 6500K, 7500K), but it's a longer topic and not particularly relevant here. There are also other illuminants (like E and F), but in most practical situations, the interesting ones are A, D50, and D65 (especially the last one).

One last thing that seems fairly obvious, but is very important later on: light behavior is linear (in mathematical terms). If you take two lights with two SPDs and you turn them on at the same time, the resulting lighting will have an SPD that's the sum of the two components. If you make the light twice as bright, the resulting SPD will be two times greater.
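On the sampled SPDs from the earlier sketches, this linearity is literally just addition and scalar multiplication. A tiny, self-contained illustration (the two made-up SPD shapes are mine):

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)
spd_a = np.exp(-0.5 * ((wavelengths - 450.0) / 40.0) ** 2)  # a bluish light
spd_b = np.exp(-0.5 * ((wavelengths - 620.0) / 40.0) ** 2)  # a reddish light

# Both lights switched on at the same time: the SPDs simply add.
combined = spd_a + spd_b

# The same light made twice as bright: the SPD is scaled by two.
brighter = 2.0 * spd_a
```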

Reflection

We rarely see light as it is generated by some source. Before it reaches our eyes, it usually bounces off things, and we register that indirect, reflected light.

The way light interacts with surfaces is an incredibly complicated topic. The most basic principle is fairly simple and described by Fresnel's equations: light reaches the boundary between two mediums (say, air and an object) and some of it gets reflected off the boundary, and some of it gets refracted into the object. The angle between the reflected light and the normal to the surface (which is a direction perpendicular to the surface) is the same as the angle between the incident light and the normal (alpha on the figure below). How much of the light goes where, and the exact direction of the refracted light, depends on the index of refraction of both mediums (which describes how fast light travels in that particular medium compared to its speed in vacuum).
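For the curious, the two relations hiding in that paragraph can be written out directly. This is only a sketch of Snell's law (the refraction angle) and the Fresnel reflectance in the normal-incidence special case; the function names are mine, and the full Fresnel equations also depend on the angle and polarization:

```python
import math

def refraction_angle(incident_angle, n1, n2):
    """Snell's law: n1 * sin(alpha) = n2 * sin(beta).
    Assumes the ray actually refracts (no total internal reflection)."""
    return math.asin(math.sin(incident_angle) * n1 / n2)

def fresnel_normal_incidence(n1, n2):
    """Fraction of light reflected when it hits the boundary head-on."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Air (n ~ 1.0) hitting a typical plastic-like medium (n ~ 1.5):
print(fresnel_normal_incidence(1.0, 1.5))  # ~0.04, i.e. about 4% reflected
```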

Unfortunately, this only describes reflection off a perfectly smooth, mirror surface - nothing like anything you see in reality. And it only describes the first reflection off a boundary. But light can bounce around off the microscopic roughnesses of the surface and go into the object in a different place. Or it can do it multiple times. Or it can go into the object, bounce around there, and go out (or not, there's a boundary when going outside to the air as well). And all this is ignoring any wave phenomena - diffraction, interference, etc.

Physics, optics, and related fields have tried to simplify all these concepts and created multiple models for describing and quantifying these effects. Some are simpler, some are very complex. Computer graphics loves them because they allow us to render realistic-looking images on a computer.

From the perspective of a miniature painter, the simplest way of looking at the light-material interaction is to split it into two components. I will call them "diffuse" and "specular" because these are the terms used in computer graphics, which I'm used to.

The "specular" component of the reflection is everything that happens on the actual boundary between the air and the object. Some of the light bounces off it. Generally, it follows the law of reflection, so the angle between the direction the light falls onto the object and the normal is the same as the angle between the direction the light is reflected at and the normal (the normal is the direction perpendicular to the surface). I say "generally" because if the surface is not perfectly smooth, the light will be scattered in different directions - the rougher the surface, the more scattered it will be - but generally, it will be around that reflected direction. One very important bit: in the case of non-metal materials, light reflected this way does not change its color. The reflected light will have the same SPD as the one falling on the object. Interestingly, this behavior is very similar for most non-metals - to the extent that in computer graphics, we often just treat all non-metal surfaces the same way: they can be rough or smooth, but they reflect roughly the same amount of light this way, no matter whether it's plastic, skin, or concrete. It's a very decent approximation. Metals, due to their atomic structure, are different. When the light reflects off their surface in a specular reflection, it actually changes color. That's why gold is yellow and copper is orange.
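In vector form, the "angle in equals angle out" rule is the standard mirror-reflection formula. A small sketch (numpy, helper names mine):

```python
import numpy as np

def reflect(direction, normal):
    """Mirror-reflect a direction around the surface normal: r = d - 2*(d.n)*n."""
    return direction - 2.0 * np.dot(direction, normal) * normal

# Light coming down at 45 degrees onto a surface whose normal points straight up:
d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
n = np.array([0.0, 1.0, 0.0])
print(reflect(d, n))  # leaves at 45 degrees on the other side: [0.707..., 0.707..., 0.]
```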

The "diffuse" component is all the light that goes into the object, and then some of it gets out and is, generally, scattered uniformly in all directions (or rather: let's just assume that for simplicity; it's a good enough approximation). It doesn't matter how the surface is viewed; its diffuse lighting is the same from all angles (unlike specular, which is strongly visible when viewed from that one particular direction, and not much when viewed from others). Not all the light that gets into the object gets scattered out. Some of it is absorbed and turned into heat. There's also an important difference between metals and non-metals when it comes to the diffuse component. For non-metals, the spectrum of the reflected light very much depends on the object: after all, the light goes into the surface, bounces around, and comes out - so it picks up some of the characteristics of the object. For metals, there's no diffuse component at all. The light doesn't go out; it's either absorbed or bounces off specularly. And even though, technically, the diffuse component is not "reflection", but rather a form of scattering occurring over short distances within the material, I will oftentimes just say "diffuse reflection" for simplicity.

Side note: even though it's safe to treat the diffuse reflection as the same in all directions, its brightness still depends on the amount of light falling onto the given part of the object. And this amount is related to the angle between the normal of the surface and the direction towards the light: the bigger the angle, the less light the given part of the object receives (actually, the amount of light is the same; it's just distributed over a larger area, which makes "light per area" smaller, and this is what we consider "brightness"). In physics, this is called Lambert's law. In many miniature painting tutorials, people talk about shading basic shapes in a particular way: spheres have round shading towards the light, cylinders have highlights along the axis, and cubes/surfaces have generally flat lighting, depending on their orientation. This is a practical application of Lambert's law. Spheres have smoothly changing normals in all possible directions, so their brightest area is going to be in sections facing the light. Normals of a cylinder change as you go around the circle but are the same as you move along the axis, so the entire length of the cylinder is shaded the same. Flat planes have a constant normal, so every point gets the same lighting.
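Lambert's law itself is essentially a one-liner. Here's a sketch in the way computer graphics usually writes it (function and variable names are mine):

```python
import numpy as np

def lambert_brightness(normal, light_dir):
    """Diffuse brightness factor: proportional to cos(angle between normal and light)."""
    cos_angle = np.dot(normal, light_dir)  # both vectors assumed normalized
    return max(cos_angle, 0.0)             # surfaces facing away receive no light

# A surface facing the light head-on vs. one tilted 60 degrees away from it:
up = np.array([0.0, 1.0, 0.0])
light = np.array([0.0, 1.0, 0.0])
tilted = np.array([np.sin(np.radians(60.0)), np.cos(np.radians(60.0)), 0.0])
print(lambert_brightness(up, light), lambert_brightness(tilted, light))  # 1.0 and 0.5
```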

(Side note to the side note: this is, of course, only true if we assume that the direction towards the light is the same on the entire surface! But is it? If we consider sunlight, the source is so far away that we can safely think that yes, every point gets the light from the same direction - it's a directional light. But for other sources, the position of the light matters too. And if the light is not a simple point but rather something larger, it all gets complicated even more).

For paints, we can focus on the diffuse component only. Miniature paints dry pretty matte, so the surface of the dried paint is fairly rough. The specular component of the reflection is very faint - it does brighten the surface a bit with the color of the light - but the main characteristics of the appearance come from the diffuse part. I'll go into some theories describing what is going on with light in the paint layer later on, but for now, we can look at the macroscopic effect: light with some particular SPD falls onto the paint layer, some of it gets scattered uniformly in all directions, and some gets absorbed. We can compute the ratio of that scattered light to the incident light. This is called reflectance.
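In terms of the sampled SPDs used above, reflectance is just a per-wavelength ratio. A minimal sketch (the incident and scattered spectra here are made up purely for illustration):

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)
incident_spd = np.ones_like(wavelengths, dtype=float)  # flat illumination, for simplicity
scattered_spd = 0.8 * np.exp(-0.5 * ((wavelengths - 620.0) / 50.0) ** 2)  # a reddish paint

# Reflectance: fraction of the incident light scattered back, per wavelength (0..1).
reflectance = scattered_spd / incident_spd
```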
