Rendering (computer graphics)






 


A variety of rendering techniques applied to a single 3D scene
An image created by using POV-Ray 3.6

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.

A software application or component that performs rendering is called a rendering engine,[1] render engine, rendering system, graphics engine, or simply a renderer.

Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always connected to the others. It is the last major step in the graphics pipeline, giving models and animation their final appearance. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.

Rendering has uses in architecture, video games, simulators, movie and TV visual effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available for use. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on multiple disciplines, including light physics, visual perception, mathematics, and software development.

Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image on a screen from a 3D representation stored in a scene file are handled by the graphics pipeline in a rendering device such as a GPU. A GPU is a purpose-built device that assists a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software must solve the rendering equation. The rendering equation does not account for all lighting phenomena, but instead acts as a general lighting model for computer-generated imagery.

In the case of 3D graphics, scenes can be pre-rendered or generated in real time. Pre-rendering is a slow, computationally intensive process that is typically used for movie creation, where scenes can be generated ahead of time, while real-time rendering is often done for 3D video games and other applications that must dynamically create scenes. 3D hardware accelerators can improve real-time rendering performance.

Usage

When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds bitmap textures or procedural textures, lights, bump mapping, and relative position to other objects. The result is a completed image the consumer or intended viewer sees.

For movie animations, several images (frames) must be rendered, and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.

Features

A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.

Assets

CAD libraries can have assets such as 3D models, textures, bump maps, HDRIs, and different computer graphics lighting sources to be rendered.[2]

Techniques

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.

Choosing how to render a scene usually involves a trade-off between speed and realism (although realism is not always desired). The techniques developed over the years follow a loose progression, with more advanced methods becoming practical as computing power and memory capacity increased.

An important distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. For simple scenes, object order is usually more efficient, as there are fewer objects than pixels.
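The difference is easiest to see as two loop structures. Below is a minimal sketch, with hypothetical helpers (ray_through_pixel, intersect_nearest, pixels_covered) standing in for real camera, intersection, and coverage code:

    # Image order: loop over pixels, ask the scene what each pixel sees.
    def render_image_order(scene, camera, width, height):
        image = [[scene.background] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                ray = camera.ray_through_pixel(x, y)   # hypothetical camera helper
                hit = intersect_nearest(scene, ray)    # hypothetical intersection helper
                if hit is not None:
                    image[y][x] = hit.color
        return image

    # Object order: loop over primitives, ask each which pixels it covers.
    def render_object_order(scene, camera, width, height):
        image = [[scene.background] * width for _ in range(height)]
        for primitive in scene.primitives:
            for x, y, color in pixels_covered(primitive, camera):  # hypothetical coverage helper
                image[y][x] = color                    # (depth testing omitted)
        return image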

Rasterization (including scanline rendering)
Geometrically projects objects in the scene to an image plane. Different realistic or stylized effects can be obtained by coloring the pixels covered by the objects in different ways. Surfaces are typically divided into meshes of triangles before being rasterized. Rasterization is usually synonymous with "object order" rendering (as described above).
Ray casting
Uses geometric formulas to compute the first object that a ray intersects.[3]: 8  It can be used to implement "image order" rendering by casting a ray for each pixel, and finding a corresponding point in the scene. Ray casting is a fundamental operation used for both graphical and non-graphical purposes,[4]: 6  e.g. determining whether a point is in shadow, or checking what an enemy can see in a game. A minimal ray–sphere intersection test is sketched after this list.
Ray tracing
Simulates the bouncing paths of light caused by specular reflection and refraction, requiring a varying number of ray casting operations for each path. Advanced forms use Monte Carlo techniques to render effects such as area lights, depth of field, blurry reflections, and soft shadows, but computing global illumination is usually in the domain of path tracing.[3]: 9-13 [5]
Path tracing
Uses Monte Carlo integration with a simplified form of ray tracing, computing the average brightness of a sample of the possible paths that a photon could take when traveling from a light source to the camera (for some images, thousands of paths need to be sampled per pixel[4]: 8 ). It was introduced as a statistically unbiased way to solve the rendering equation, giving ray tracing a rigorous mathematical foundation.[6][3]: 11-13 
Radiosity
A finite element analysis approach that breaks surfaces in the scene into pieces, and estimates the amount of light that each piece receives from light sources, or indirectly from other surfaces. Once the irradiance of each surface is known, the scene can be rendered using rasterization or ray tracing.[7]: 888-890, 1044-1045 

Each of the above approaches has many variations, and there is some overlap. Path tracing may be considered either a distinct technique or a particular type of ray tracing.[7]: 846, 1021  Note that the usage of terminology related to ray tracing and path tracing has changed significantly over time.[3]: 7 
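As promised above, here is a minimal sketch of the classic ray–sphere intersection test used in ray casting (solving the quadratic for the ray parameter t); the function and parameter names are illustrative, not from any particular renderer:

    import math

    def intersect_ray_sphere(origin, direction, center, radius):
        """Return the smallest positive ray parameter t at which the ray
        origin + t*direction hits the sphere, or None if it misses.
        direction is assumed normalized; vectors are 3-tuples."""
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c        # a == 1 for a normalized direction
        if disc < 0.0:
            return None               # ray misses the sphere entirely
        sqrt_disc = math.sqrt(disc)
        for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
            if t > 1e-6:              # nearest hit in front of the origin
                return t
        return None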

Rendering of a fractal terrain by ray marching

Ray marching is a family of algorithms, used by ray casting, for finding intersections between a ray and a complex object, such as a volumetric dataset or a surface defined by a signed distance function. It is not, by itself, a rendering method, but it can be incorporated into ray tracing and path tracing, and is used by rasterization to implement screen-space reflection and other effects.[3]: 13 
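One popular ray marching variant is sphere tracing over a signed distance function (SDF): since the SDF gives the distance to the nearest surface, the ray can safely advance by that amount each step. A minimal sketch, with an illustrative hard-coded example sphere as the scene:

    import math

    def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
        """Signed distance from point p to an example sphere."""
        return math.dist(p, center) - radius

    def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
        """March along origin + t*direction until the SDF reports a surface
        (distance < eps) or we give up. Returns t or None."""
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            dist = sdf(p)
            if dist < eps:
                return t          # close enough: call it a hit
            t += dist             # safe step: nothing is nearer than dist
            if t > t_max:
                break
        return None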

A technique called photon mapping or photon tracing uses forward ray tracing (also called particle tracing), tracing paths of photons from a light source to an object, rather than backward from the camera. The additional data collected by this process is used together with conventional backward ray tracing or path tracing.[7]: 1037-1039  Rendering a scene using only forward ray tracing is impractical, even though it corresponds more closely to reality, because a huge number of photons would need to be simulated, only a tiny fraction of which actually hit the camera.[8]: 7-9 

Real-time rendering, including video game graphics, typically uses rasterization, but increasingly combines it with ray tracing and path tracing.[4]: 2  To enable realistic global illumination, real-time rendering often relies on pre-rendered ("baked") lighting for stationary objects. For moving objects, it may use a technique called light probes, in which lighting is recorded by rendering omnidirectional views of the scene at chosen points in space (often points on a grid to allow easier interpolation). These are similar to environment maps, but typically use a very low resolution or an approximation such as spherical harmonics.[9] (Note: Blender uses the term 'light probes' for a more general class of pre-recorded lighting data, including reflection maps.[10])

The same scene (three cartoon cows, one with a mirror surface and one of transparent glass) rendered with different techniques:

  • A low quality rasterized image, rendered by Blender's EEVEE renderer with low shadow map resolution and a low-resolution mesh
  • A low quality path traced image, rendered by Blender's Cycles renderer with only 16 sampled paths per pixel and a low-resolution mesh
  • A ray traced image, using the POV-Ray program (using only its ray tracing features) with a low-resolution mesh
  • A higher quality rasterized image, using Blender's EEVEE renderer with light probes
  • A higher quality path traced image, using Blender's Cycles renderer with 2000 sampled paths per pixel
  • An image rendered using POV-Ray's ray tracing, radiosity and photon mapping features
  • A more realistic path traced image, using Blender's Cycles renderer with image-based lighting

Scanline rendering and rasterization

Rendering of the Extremely Large Telescope

A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.

If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.

Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.

The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows graphics to look smooth even without complicated textures (a rasterized image rendered face by face as a single color tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, increasing speed without hurting the overall effect.
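A minimal sketch of this vertex-color style of rasterization, using barycentric coordinates both as the coverage test and as the blending weights (all names illustrative; vertices are assumed counter-clockwise and depth testing is omitted):

    def edge(ax, ay, bx, by, px, py):
        """Signed area used for the coverage test and barycentric weights."""
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def rasterize_triangle(image, v0, v1, v2):
        """Each vertex is (x, y, (r, g, b)). Fills covered pixels with a
        blend of the vertex colors (Gouraud-style interpolation)."""
        (x0, y0, c0), (x1, y1, c1), (x2, y2, c2) = v0, v1, v2
        area = edge(x0, y0, x1, y1, x2, y2)
        if area == 0:
            return  # degenerate triangle covers no pixels
        # Only visit pixels inside the triangle's bounding box.
        min_x, max_x = int(min(x0, x1, x2)), int(max(x0, x1, x2))
        min_y, max_y = int(min(y0, y1, y2)), int(max(y0, y1, y2))
        for y in range(min_y, max_y + 1):
            for x in range(min_x, max_x + 1):
                w0 = edge(x1, y1, x2, y2, x, y) / area
                w1 = edge(x2, y2, x0, y0, x, y) / area
                w2 = 1.0 - w0 - w1
                if w0 >= 0 and w1 >= 0 and w2 >= 0:   # pixel is covered
                    image[y][x] = tuple(
                        w0 * a + w1 * b + w2 * c
                        for a, b, c in zip(c0, c1, c2))

For the older flat-shaded style, the same loop simply writes one fixed face color instead of the weighted blend.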

Ray casting

In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.

Ray casting involves calculating the "view direction" (from the camera position), and incrementally following that ray cast through "solid 3D objects" in the scene, while accumulating the resulting value from each point in 3D space. This is related and similar to ray tracing, except that the ray cast is usually not "bounced" off surfaces (whereas ray tracing traces out the light's path, including bounces). "Ray casting" implies that the light ray follows a straight path (which may include traveling through semi-transparent objects). The ray cast is a vector that can originate from the camera or from the scene endpoint ("back to front", or "front to back"). Sometimes the final light value is derived from a "transfer function" and sometimes it's used directly.
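When the objects are volumetric, this accumulation is typically front-to-back alpha compositing, with a transfer function mapping each sampled density to a color and opacity. A minimal sketch; sample_density and transfer are illustrative placeholders:

    def cast_volume_ray(origin, direction, sample_density, transfer,
                        step=0.01, t_max=10.0):
        """Front-to-back compositing along origin + t*direction.
        transfer(density) -> ((r, g, b), alpha) maps density to appearance."""
        color = [0.0, 0.0, 0.0]
        alpha = 0.0
        t = 0.0
        while t < t_max and alpha < 0.99:      # early exit once nearly opaque
            p = tuple(o + t * d for o, d in zip(origin, direction))
            (r, g, b), a = transfer(sample_density(p))
            weight = (1.0 - alpha) * a         # what this sample still contributes
            color[0] += weight * r
            color[1] += weight * g
            color[2] += weight * b
            alpha += weight
            t += step
        return tuple(color), alpha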

Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.

Ray tracing

Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language

Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, or Metropolis light transport, but semi-realistic methods are also in use, like Whitted-style ray tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.[11]

In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.

Once the ray either encounters a light source, or more probably once a preset limit on the number of bounces has been reached, the surface illumination at that final point is evaluated using the techniques described above, and the changes along the way through the various bounces are evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.
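A minimal sketch of this recursive bounce loop, handling perfect mirror reflection only ("angle of incidence equals angle of reflection"); trace_nearest and the material/shading attributes are illustrative placeholders, and the usual tiny offset of the bounce origin along the normal is omitted for brevity:

    def reflect(d, n):
        """Mirror direction d about unit normal n: d - 2(d.n)n."""
        dn = sum(a * b for a, b in zip(d, n))
        return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

    def shade_ray(scene, origin, direction, depth=0, max_depth=4):
        if depth > max_depth:
            return (0.0, 0.0, 0.0)                  # bounce limit reached
        hit = trace_nearest(scene, origin, direction)   # assumed helper
        if hit is None:
            return scene.background
        local = hit.material.shade(hit, scene.lights)   # direct illumination
        if hit.material.reflectivity > 0.0:
            r = reflect(direction, hit.normal)
            bounced = shade_ray(scene, hit.point, r, depth + 1, max_depth)
            local = tuple(l + hit.material.reflectivity * b
                          for l, b in zip(local, bounced))
        return local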

In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray (or none) is fired at each intersection, exploiting the statistical nature of Monte Carlo experiments.

As part of the approach known as physically based rendering, path tracing has become the dominant technique for rendering realistic scenes, including effects for movies.[12] For example, the popular open source 3D software Blender uses path tracing in its Cycles renderer.[13] Images produced using path tracing for global illumination are generally noisier than when using radiosity (the main competing algorithm), but radiosity can be difficult to apply to complex scenes and is prone to artifacts that arise from using a tessellated representation of irradiance.[12][7]: 975-976, 1045 

Path tracing's relative simplicity and its nature as a Monte Carlo method (sampling hundreds or thousands of paths per pixel) make it attractive to implement on a GPU, especially on recent GPUs that support ray tracing acceleration technology such as Nvidia's RTX and OptiX.[14] Many techniques have been developed to denoise the output of path tracing, reducing the number of paths required to achieve acceptable quality, at the risk of losing some detail or introducing small-scale artifacts that are more objectionable than noise;[15][16] neural networks are now widely used for this purpose.[17][18][19]

Advances in GPU technology have made real-time ray tracing possible in games, although it is currently almost always used in combination with rasterization.[4]: 2  This enables visual effects that are difficult with only rasterization, including reflection from curved surfaces and interreflective objects,[20]: 305  and shadows that are accurate over a wide range of distances and surface orientations.[21]: 159-160  Ray tracing support is included in recent versions of the graphics APIs used by games, such as DirectX, Metal, and Vulkan.[22]

Neural rendering

Neural rendering is a rendering method using artificial neural networks.[23][24] Neural rendering includes image-based rendering methods that are used to reconstruct 3D models from 2-dimensional images.[23] One of these methods is photogrammetry, in which a collection of images of an object, taken from multiple angles, is turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings, notably by Nvidia, Google, and various other companies.

Radiosity

Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.

The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.

The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.
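A minimal sketch of such a bounce iteration, in the classic matrix form B = E + ρFB, where F[i][j] is the form factor giving the fraction of light leaving patch j that arrives at patch i (computing F is the expensive geometric part and is assumed done here):

    def solve_radiosity(emission, reflectance, F, iterations=50):
        """Jacobi-style iteration of B_i = E_i + rho_i * sum_j F_ij * B_j.
        emission, reflectance: per-patch lists; F: n x n form factors."""
        n = len(emission)
        B = list(emission)                 # start from emitted light only
        for _ in range(iterations):        # each pass is one light 'bounce'
            B = [emission[i] + reflectance[i] *
                 sum(F[i][j] * B[j] for j in range(n))
                 for i in range(n)]
        return B                           # per-patch radiosity values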

Due to the iterative/recursive nature of the technique, complex objects are particularly slow to simulate. Prior to the standardization of rapid radiosity calculation, some digital artists used a technique referred to loosely as false radiosity, darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity – or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.

Radiosity calculations are viewpoint-independent, which increases the computation involved but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting without seriously impacting the overall rendering time-per-frame.

Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films.

Sampling and filtering

One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must span at least two pixels, so displayable detail is proportional to the image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.

If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.
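The simplest such low-pass filter is supersampling: evaluate the image function at several sub-pixel positions and average them (a box filter). A minimal sketch; image_function(x, y) returning an (r, g, b) color is an illustrative placeholder:

    def render_antialiased(image_function, width, height, grid=4):
        """Average a grid x grid pattern of sub-pixel samples per pixel."""
        image = []
        for y in range(height):
            row = []
            for x in range(width):
                acc = [0.0, 0.0, 0.0]
                for sy in range(grid):
                    for sx in range(grid):
                        # Sample at the center of each sub-pixel cell.
                        px = x + (sx + 0.5) / grid
                        py = y + (sy + 0.5) / grid
                        c = image_function(px, py)
                        acc = [a + b for a, b in zip(acc, c)]
                row.append(tuple(a / grid**2 for a in acc))
            image.append(row)
        return image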

Optimization

Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time. In the initial stages of modeling, wireframe rendering and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.

For real-time rendering, it is appropriate to use one or more common approximations, and to tune the renderer to the exact parameters of the scenery in question to get the most 'bang for the buck'.

Academic core

The implementation of a realistic renderer always has some basic element of physical simulation or emulation – some computation which resembles or abstracts a real physical process.

The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.

The basic concepts are moderately straightforward, but intractable to calculate, and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.

Rendering research is concerned with both the adaptation of scientific models and their efficient application.

The rendering equation

This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation.
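In its standard form, the equation is written:

    \[ L_o(x,\, \omega_o) = L_e(x,\, \omega_o) + \int_{\Omega} f_r(x,\, \omega_i,\, \omega_o)\, L_i(x,\, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i \]

where x is a surface point, ωo and ωi are the outgoing and incoming light directions, n is the surface normal, and fr is the BRDF described below.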

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is itself the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and the incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' – all the movement of light – in a scene.

The bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:
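    \[ f_r(x,\, \omega_i,\, \omega_o) = \frac{\mathrm{d}L_o(x,\, \omega_o)}{L_i(x,\, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i} \]

This is the standard definition: the ratio of reflected radiance leaving toward ωo to the irradiance arriving from direction ωi, at surface point x with normal n.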

Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection, although both of these can also be expressed as BRDFs.

Geometric optics

Rendering is practically exclusively concerned with the particle aspect of light physics – known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.

Visual perception

Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays – movie screen, computer monitor, etc. – cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties will not be noticeable. This related subject is tone mapping.
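As a concrete example of the tone mapping step, here is a minimal sketch of the well-known global Reinhard operator, which compresses unbounded scene luminance into the displayable range [0, 1):

    def reinhard_tonemap(luminance):
        """Map scene luminance in [0, inf) to display range [0, 1)."""
        return luminance / (1.0 + luminance)

    # Bright values are compressed strongly, dark values barely change:
    for L in (0.05, 0.5, 1.0, 10.0, 1000.0):
        print(f"{L:8.2f} -> {reinhard_tonemap(L):.3f}")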

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

Rendering for movies often takes place on a network of tightly connected computers known as a render farm.

The current state of the art in 3-D image description for movie creation is the Mental Ray scene description language designed at Mental Images and the RenderMan Shading Language designed at Pixar[25] (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators).

Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include interactive photorealistic rendering (IPR) and hardware rendering/shading.

Chronology of concepts

Rendering of the ESTCube-1 satellite
  • 1968 - Ray casting[26]
  • 1970 - Scanline rendering[27]
  • 1971 - Gouraud shading[28]
  • 1973 - Phong shading[29][30]
  • 1973 - Phong reflection[29]
  • 1973 - Diffuse reflection[31]
  • 1973 - Specular highlight[29]
  • 1973 - Specular reflection[29]
  • 1974 - Sprites[32]
  • 1974 - Scrolling[32]
  • 1974 - Texture mapping[33]
  • 1974 - Z-buffering[33]
  • 1976 - Environment mapping[34]
  • 1977 - Blinn shading[35]
  • 1977 - Side-scrolling[36]
  • 1977 - Shadow volumes[37]
  • 1978 - Shadow mapping[38]
  • 1978 - Bump mapping[39]
  • 1979 - Tile map[40]
  • 1980 - BSP trees[41]
  • 1980 - Ray tracing[42]
  • 1981 - Parallax scrolling[43]
  • 1981 - Sprite zooming[44]
  • 1981 - Cook shader[45]
  • 1983 - MIP maps[46]
  • 1984 - Octree ray tracing[47]
  • 1984 - Alpha compositing[48]
  • 1984 - Distributed ray tracing[49]
  • 1984 - Radiosity[50]
  • 1985 - Row/column scrolling[51]
  • 1985 - Hemicube radiosity[52]
  • 1986 - Light source tracing[53]
  • 1986 - Rendering equation[54]
  • 1987 - Reyes rendering[55]
  • 1988 - Depth cue[56]
  • 1988 - Distance fog[56]
  • 1988 - Tiled rendering[56]
  • 1991 - Xiaolin Wu line anti-aliasing[57][58]
  • 1991 - Hierarchical radiosity[59]
  • 1993 - Texture filtering[60]
  • 1993 - Perspective correction[61]
  • 1993 - Transform, clipping, and lighting[62]
  • 1993 - Directional lighting[62]
  • 1993 - Trilinear interpolation[62]
  • 1993 - Z-culling[62]
  • 1993 - Oren–Nayar reflectance[63]
  • 1993 - Tone mapping[64]
  • 1993 - Subsurface scattering[65]
  • 1994 - Ambient occlusion[66]
  • 1995 - Hidden-surface determination[67]
  • 1995 - Photon mapping[68]
  • 1996 - Multisample anti-aliasing[69]
  • 1997 - Metropolis light transport[70]
  • 1997 - Instant Radiosity[71]
  • 1998 - Hidden-surface removal[72]
  • 2000 - Pose space deformation[73]
  • 2002 - Precomputed Radiance Transfer[74]
See also

  • 3D computer graphics – Graphics that use a three-dimensional representation of geometric data
  • 3D rendering – Process of converting 3D scenes into 2D images
  • Artistic rendering – Style of rendering
  • Architectural rendering – creating two-dimensional images or animations showing the attributes of a proposed architectural design
  • Chromatic aberration – Failure of a lens to focus all colors on the same point
  • Displacement mapping – Computer graphics technique
  • Font rasterization – Process of converting text from vector to raster
  • Global illumination – Group of rendering algorithms used in 3D computer graphics
  • Graphics pipeline – Procedure to convert 3D scenes to 2D images
  • Heightmap – Type of raster image in computer graphics
  • High-dynamic-range rendering – Rendering of computer graphics scenes by using lighting calculations done in high-dynamic-range
  • Image-based modeling and rendering
  • List of 3D rendering software
  • Motion blur – Photography artifact from moving objects
  • Non-photorealistic rendering – Style of rendering
  • Normal mapping – Texture mapping technique
  • Painter's algorithm – Algorithm for visible surface determination in 3D graphics
  • Per-pixel lighting
  • Physically based rendering – Computer graphics technique
  • Pre-rendering – Process in which video footage is not rendered in real-time
  • Raster image processor – component used in a printing system which produces a raster image also known as a bitmap
  • Radiosity – Computer graphics rendering method using diffuse reflection
  • Ray tracing – Rendering method
  • Real-time computer graphics – Sub-field of computer graphics
  • Reyes – Computer software architecture in 3D computer graphics
  • Scanline rendering/Scanline algorithm – 3D computer graphics image rendering method
  • Software rendering – Generating images by computer software
  • Sprite (computer graphics) – 2D bitmap displayed on top of a larger scene
  • Unbiased rendering – Type of rendering in computer graphics
  • Vector graphics – Computer graphics images defined by points, lines and curves
  • VirtualGL
  • Virtual model – Form of computer-aided engineering
  • Virtual studio – Technology for television and film production
  • Volume rendering – Representing a 3D-modeled object or dataset as a 2D projection
  • Z-buffer algorithms – Type of data buffer in computer graphics
References

  • ^ https://cedreo.com/blog/sketchup-rendering-plugins/
  • ^ a b c d e Haines, Eric; Shirley, Peter (February 25, 2019). "1. Ray Tracing Terminology". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN 978-1-4842-4427-2. S2CID 71144394.
  • ^ a b c d Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty; Pesce, Angelo; Iwanicki, Michał; Hillaire, Sébastien (August 6, 2018). "Online chapter 26. Real-Time Ray Tracing" (PDF). Real-Time Rendering (4th ed.). Boca Raton, FL: A K Peters/CRC Press. ISBN 978-1138627000.
  • ^ Cook, Robert L. (April 11, 2019) [1989]. "5. Stochastic Sampling and Distributed Ray Tracing". In Glassner, Andrew S. (ed.). An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN 978-0-12-286160-4.
  • ^ Kajiya, James T. (August 1986). "The rendering equation". ACM SIGGRAPH Computer Graphics. 20 (4): 143–150. doi:10.1145/15886.15902. Retrieved 27 January 2024.
  • ^ a b c d Glassner, Andrew S. (2011) [1995]. Principles of digital image synthesis (PDF). 1.0.1. Morgan Kaufmann Publishers, Inc. ISBN 978-1-55860-276-2.
  • ^ Glassner, Andrew S. (April 11, 2019) [1989]. "1. An Overview of Ray Tracing". An Introduction to Ray Tracing (PDF). 1.3. ACADEMIC PRESS. ISBN 978-0-12-286160-4.
  • ^ "Unity Manual:Light Probes: Introduction". docs.unity3d.com. Retrieved 27 January 2024.
  • ^ "Blender Manual: Rendering: EEVEE: Light Probes: Introduction". docs.blender.org. The Blender Foundation. Retrieved 27 January 2024.
  • ^ "Relativistic Ray-Tracing: Simulating the Visual Appearance of Rapidly Moving Objects". 1995. CiteSeerX 10.1.1.56.830. {{cite journal}}: Cite journal requires |journal= (help)
  • ^ a b Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "1.6". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0262048026.
  • ^ "Blender Manual: Rendering: Cycles: Introduction". docs.blender.org. The Blender Foundation. Retrieved 27 January 2024.
  • ^ Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "15. Wavefront Rendering on GPUs". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0262048026.
  • ^ Pharr, Matt; Jakob, Wenzel; Humphreys, Greg (March 28, 2023). "4. Further Reading: Denoising". Physically Based Rendering: From Theory to Implementation (4th ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0262048026.
  • ^ "Blender Manual: Rendering: Cycles: Optimizing Renders: Reducing Noise". docs.blender.org. The Blender Foundation. Retrieved 27 January 2024.
  • ^ "Blender Manual: Rendering: Cycles: Render Settings: Sampling". docs.blender.org. The Blender Foundation. Retrieved 27 January 2024.
  • ^ "Intel® Open Image Denoise: High-Performance Denoising Library for Ray Tracing". www.openimagedenoise.org. Intel Corporation. Retrieved 27 January 2024.
  • ^ "NVIDIA OptiX™ AI-Accelerated Denoiser". developer.nvidia.com. NVIDIA Corporation. Retrieved 27 January 2024.
  • ^ Liu, Edward; Llamas, Ignacio; Cañada, Juan; Kelly, Patrick (February 25, 2019). "19: Cinematic Rendering in UE4 with Real-Time Ray Tracing and Denoising". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN 978-1-4842-4427-2. S2CID 71144394.
  • ^ Boksansky, Jakub; Wimmer, Michael; Bittner, Jiri (February 25, 2019). "13. Ray Traced Shadows: Maintaining Real-Time Frame Rates". Ray Tracing Gems: High-Quality and Real-Time Rendering with DXR and Other APIs. Berkeley, CA: Apress. doi:10.1007/978-1-4842-4427-2. ISBN 978-1-4842-4427-2. S2CID 71144394.
  • ^ "Khronos Blog: Ray Tracing In Vulkan". www.khronos.org. The Khronos® Group Inc. December 15, 2020. Retrieved 27 January 2024.
  • ^ a b Tewari, A.; Fried, O.; Thies, J.; Sitzmann, V.; Lombardi, S.; Sunkavalli, K.; Martin-Brualla, R.; Simon, T.; Saragih, J.; Nießner, M.; Pandey, R.; Fanello, S.; Wetzstein, G.; Zhu, J.-Y.; Theobalt, C.; Agrawala, M.; Shechtman, E.; Goldman, D. B.; Zollhöfer, M. (2020). "State of the Art on Neural Rendering". Computer Graphics Forum. 39 (2): 701–727. arXiv:2004.03805. doi:10.1111/cgf.14022. S2CID 215416317.
  • ^ Knight, Will. "A New Trick Lets Artificial Intelligence See in 3D". Wired. ISSN 1059-1028. Retrieved 2022-02-08.
  • ^ Raghavachary, Saty (30 July 2006). "A brief introduction to RenderMan". ACM SIGGRAPH 2006 Courses on - SIGGRAPH '06. ACM. p. 2. doi:10.1145/1185657.1185817. ISBN 978-1595933645. S2CID 34496605. Retrieved 7 May 2018 – via dl.acm.org.
  • ^ Appel, A. (1968). "Some techniques for shading machine renderings of solids" (PDF). Proceedings of the Spring Joint Computer Conference. Vol. 32. pp. 37–49. Archived (PDF) from the original on 2012-03-13.
  • ^ Bouknight, W. J. (1970). "A procedure for generation of three-dimensional half-tone computer graphics presentations". Communications of the ACM. 13 (9): 527–536. doi:10.1145/362736.362739. S2CID 15941472.
  • ^ Gouraud, H. (1971). "Continuous shading of curved surfaces" (PDF). IEEE Transactions on Computers. 20 (6): 623–629. doi:10.1109/t-c.1971.223313. S2CID 123827991. Archived from the original (PDF) on 2010-07-02.
  • ^ a b c d "History | School of Computing". Archived from the original on 2013-12-03. Retrieved 2021-11-22.
  • ^ Phong, B-T (1975). "Illumination for computer generated pictures" (PDF). Communications of the ACM. 18 (6): 311–316. CiteSeerX 10.1.1.330.4718. doi:10.1145/360825.360839. S2CID 1439868. Archived from the original (PDF) on 2012-03-27.
  • ^ Bui Tuong Phong, Illumination for computer generated pictures Archived 2016-03-20 at the Wayback Machine, Communications of ACM 18 (1975), no. 6, 311–317.
  • ^ a b Putas. "The way to home 3d". vintage3d.org. Archived from the original on 15 December 2017. Retrieved 7 May 2018.
  • ^ a b Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PDF) (PhD thesis). University of Utah. Archived from the original (PDF) on 2014-11-14. Retrieved 2011-07-15.
  • ^ Blinn, J.F.; Newell, M.E. (1976). "Texture and reflection in computer generated images". Communications of the ACM. 19 (10): 542–546. CiteSeerX 10.1.1.87.8903. doi:10.1145/360349.360353. S2CID 408793.
  • ^ Blinn, James F. (20 July 1977). "Models of light reflection for computer synthesized pictures". ACM SIGGRAPH Computer Graphics. 11 (2): 192–198. doi:10.1145/965141.563893 – via dl.acm.org.
  • ^ "Bomber - Videogame by Sega". www.arcade-museum.com. Archived from the original on 17 October 2017. Retrieved 7 May 2018.
  • ^ Crow, F.C. (1977). "Shadow algorithms for computer graphics" (PDF). Computer Graphics (Proceedings of SIGGRAPH 1977). Vol. 11. pp. 242–248. Archived from the original (PDF) on 2012-01-13. Retrieved 2011-07-15.
  • ^ Williams, L. (1978). "Casting curved shadows on curved surfaces". Computer Graphics (Proceedings of SIGGRAPH 1978). Vol. 12. pp. 270–274. CiteSeerX 10.1.1.134.8225.
  • ^ Blinn, J.F. (1978). Simulation of wrinkled surfaces (PDF). Computer Graphics (Proceedings of SIGGRAPH 1978). Vol. 12. pp. 286–292. Archived (PDF) from the original on 2012-01-21.
  • ^ Wolf, Mark J. P. (15 June 2012). Before the Crash: Early Video Game History. Wayne State University Press. ISBN 978-0814337226. Archived from the original on 2 May 2019. Retrieved 7 May 2018 – via Google Books.
  • ^ Fuchs, H.; Kedem, Z.M.; Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980). Vol. 14. pp. 124–133. CiteSeerX 10.1.1.112.4406.
  • ^ Whitted, T. (1980). "An improved illumination model for shaded display". Communications of the ACM. 23 (6): 343–349. CiteSeerX 10.1.1.114.7629. doi:10.1145/358876.358882. S2CID 9524504.
  • ^ Purcaru, Bogdan Ion (13 March 2014). "Games vs. Hardware. The History of PC video games: The 80's". Purcaru Ion Bogdan. Archived from the original on 30 April 2021. Retrieved 7 May 2018 – via Google Books.
  • ^ "System 16 - Sega VCO Object Hardware (Sega)". www.system16.com. Archived from the original on 5 April 2016. Retrieved 7 May 2018.
  • ^ Cook, R.L.; Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981). Vol. 15. pp. 307–316. CiteSeerX 10.1.1.88.7796.
  • ^ Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983). Vol. 17. pp. 1–11. CiteSeerX 10.1.1.163.6298.
  • ^ Glassner, A.S. (1984). "Space subdivision for fast ray tracing". IEEE Computer Graphics & Applications. 4 (10): 15–22. doi:10.1109/mcg.1984.6429331. S2CID 16965964.
  • ^ Porter, T.; Duff, T. (1984). Compositing digital images (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 253–259. Archived (PDF) from the original on 2015-02-16.
  • ^ Cook, R.L.; Porter, T.; Carpenter, L. (1984). Distributed ray tracing (PDF). Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 137–145.[permanent dead link]
  • ^ Goral, C.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984). Vol. 18. pp. 213–222. CiteSeerX 10.1.1.112.356.
  • ^ "Archived copy". Archived from the original on 2016-03-04. Retrieved 2016-08-08.{{cite web}}: CS1 maint: archived copy as title (link)
  • ^ Cohen, M.F.; Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 1985). Vol. 19. pp. 31–40. doi:10.1145/325165.325171. Archived from the original (PDF) on 2014-04-24. Retrieved 2020-03-25.
  • ^ Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes. CiteSeerX 10.1.1.31.581.
  • ^ Kajiya, J. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986). Vol. 20. pp. 143–150. CiteSeerX 10.1.1.63.1402.
  • ^ Cook, R.L.; Carpenter, L.; Catmull, E. (1987). The Reyes image rendering architecture (PDF). Computer Graphics (Proceedings of SIGGRAPH 1987). Vol. 21. pp. 95–102. Archived (PDF) from the original on 2011-07-15.
  • ^ a b c "MAME | SRC/Mame/Drivers/Namcos21.c". Archived from the original on 2014-10-03. Retrieved 2014-10-02.
  • ^ Wu, Xiaolin (July 1991). "An efficient antialiasing technique". ACM SIGGRAPH Computer Graphics. 25 (4): 143–152. doi:10.1145/127719.122734. ISBN 978-0-89791-436-9.
  • ^ Wu, Xiaolin (1991). "Fast Anti-Aliased Circle Generation". In James Arvo (ed.). Graphics Gems II. San Francisco: Morgan Kaufmann. pp. 446–450. ISBN 978-0-12-064480-3.
  • ^ Hanrahan, P.; Salzman, D.; Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991). Vol. 25. pp. 197–206. CiteSeerX 10.1.1.93.5694.
  • ^ "IGN Presents the History of SEGA". ign.com. 21 April 2009. Archived from the original on 16 March 2018. Retrieved 7 May 2018.
  • ^ "System 16 - Sega Model 2 Hardware (Sega)". www.system16.com. Archived from the original on 21 December 2010. Retrieved 7 May 2018.
  • ^ a b c d "System 16 - Namco Magic Edge Hornet Simulator Hardware (Namco)". www.system16.com. Archived from the original on 12 September 2014. Retrieved 7 May 2018.
  • ^ M. Oren and S.K. Nayar, "Generalization of Lambert's Reflectance Model Archived 2010-02-15 at the Wayback Machine". SIGGRAPH. pp.239-246, Jul, 1994
  • ^ Tumblin, J.; Rushmeier, H.E. (1993). "Tone reproduction for realistic computer generated images" (PDF). IEEE Computer Graphics & Applications. 13 (6): 42–48. doi:10.1109/38.252554. S2CID 6459836. Archived (PDF) from the original on 2011-12-08.
  • ^ Hanrahan, P.; Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993). Vol. 27. pp. 165–174. CiteSeerX 10.1.1.57.9761.
  • ^ Miller, Gavin (24 July 1994). "Efficient algorithms for local and global accessibility shading". Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94. ACM. pp. 319–326. doi:10.1145/192161.192244. ISBN 978-0897916677. S2CID 15271113. Archived from the original on 22 November 2021. Retrieved 7 May 2018 – via dl.acm.org.
  • ^ "Archived copy" (PDF). Archived (PDF) from the original on 2016-10-11. Retrieved 2016-08-08.{{cite web}}: CS1 maint: archived copy as title (link)
  • ^ Jensen, H.W.; Christensen, N.J. (1995). "Photon maps in bidirectional monte carlo ray tracing of complex objects". Computers & Graphics. 19 (2): 215–224. CiteSeerX 10.1.1.97.2724. doi:10.1016/0097-8493(94)00145-o.
  • ^ "System 16 - Sega Model 3 Step 1.0 Hardware (Sega)". www.system16.com. Archived from the original on 6 October 2014. Retrieved 7 May 2018.
  • ^ Veach, E.; Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997). Vol. 16. pp. 65–76. CiteSeerX 10.1.1.88.944.
  • ^ Keller, A. (1997). Instant Radiosity. Computer Graphics (Proceedings of SIGGRAPH 1997). Vol. 24. pp. 49–56. CiteSeerX 10.1.1.15.240.
  • ^ "Hardware Review: Neon 250 Specs & Features". sharkyextreme.com. Archived from the original on 2007-08-07. Retrieved 2021-11-22.
  • ^ Lewis, J. P.; Cordner, Matt; Fong, Nickson (1 July 2000). "Pose space deformation: A unified approach to shape interpolation and skeleton-driven deformation". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. ACM Press/Addison-Wesley Publishing Co. pp. 165–172. doi:10.1145/344779.344862. ISBN 978-1581132083. S2CID 12672235 – via dl.acm.org.
  • ^ Sloan, P.; Kautz, J.; Snyder, J. (2002). Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low Frequency Lighting Environments (PDF). Computer Graphics (Proceedings of SIGGRAPH 2002). Vol. 29. pp. 527–536. Archived from the original (PDF) on 2011-07-24.
Further reading

  • Akenine-Möller, Tomas; Haines, Eric; Hoffman, Naty; Pesce, Angelo; Iwanicki, Michał; Hillaire, Sébastien (2018). Real-time rendering (4 ed.). Boca Raton, FL, USA: AK Peters. ISBN 978-1-13862-700-0.
  • Blinn, Jim (1996). Jim Blinn's corner : a trip down the graphics pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 978-1-55860-387-5.
  • Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass. [u.a.]: Academic Press Professional. ISBN 978-0-12-178270-2.
  • Philip Dutré; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination ([Online-Ausg.] ed.). Natick, Mass.: A K Peters. ISBN 978-1-56881-177-2.
  • Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics : principles and practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 978-0-201-12110-0.
  • Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London [u.a.]: Acad. Press. ISBN 978-0-12-286160-4.
  • Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 978-1-55860-276-2.
  • Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters. ISBN 978-1-56881-133-8.
  • Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping ([Nachdr.] ed.). Natick, Mass.: AK Peters. ISBN 978-1-56881-147-5.
  • Pharr, Matt; Humphreys, Greg (2004). Physically based rendering from theory to implementation. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 978-0-12-553180-1.
  • Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters. ISBN 978-1-56881-198-7.
  • Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics modeling, rendering, and animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN 978-1-55860-787-3.
  • Ward, Gregory J. (July 1994). "The RADIANCE lighting simulation and rendering system". Proceedings of the 21st annual conference on Computer graphics and interactive techniques - SIGGRAPH '94. pp. 459–72. doi:10.1145/192161.192286. ISBN 0897916670. S2CID 2487835.

