Archive For The “Unity Asset Store” Category
Yeah, yeah. I said I would no longer waste blog space on fickle betas and unavailable alpha experiments… But lighting is the single most important aspect of any 3D film – no matter the art style, without highlights and shadows objects have no depth and cannot be visually “anchored” in a scene. Lighting also happens to be the framerate killer for game engines: each realtime light multiplies rendering work, since every object a light touches must effectively be drawn again for that light. Finding an artistic balance between the fewest possible live scene lights and an approximation of ambient and reflected bounce lighting, or global illumination, is the Holy Grail of realtime 3D… Here’s the latest maybe, called HXGI, by the author of Hx Volumetric Lighting.
The typical solution is to bake static scene lights and shadows onto a second texture map, either in another program or internally in the game engine. Most game engines also employ reflection probes and a low-res dynamic shadow map to blend over the baked shadows. With the adoption of Lightmass in Unreal and Enlighten in Unity, small scenes can be entirely dynamic with propagated bounce light continuously rebaked on the fly, limited only by the resolution of their shadow maps and speed of the GPU. Very fine-detail shadows that fall below the resolution of the lightmap are handled with an onscreen image-effect ambient occlusion, and some models may have their own AO maps.
One problem, however, is that animated meshes can’t possibly rebake their maps every frame. A figure walking through the environment, including hair, clothing, and props, can’t update reflective GI maps quickly enough. Static scene objects can be pre-baked or updated every few seconds, but animated figures still require scene lights. One workaround is Unity’s Light Probe Proxy Volume: a 3D grid of light-probe samples whose interpolated colors approximate the bounce light across a large moving object. The cruder alternative – faking ambient bounce with dozens of extra point lights – is ruinous for frame rates, and such an elaborately lit scene would be a chore to create by hand and cannot be generated procedurally with any strategic efficiency.
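To illustrate what a probe volume does under the hood, here is a minimal Python sketch (illustrative pseudocode only, not Unity’s actual implementation) of trilinearly interpolating a color from the eight probes at the corners of a grid cell:

```python
def lerp(a, b, t):
    """Linear blend between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def trilinear(probes, x, y, z):
    """Interpolate a colour inside a unit cube of 8 corner probes.

    probes[i][j][k] is the RGB tuple at corner (i, j, k);
    x, y, z are the sample position in [0, 1] within the cell.
    """
    # Blend the four edges along x...
    c00 = lerp(probes[0][0][0], probes[1][0][0], x)
    c10 = lerp(probes[0][1][0], probes[1][1][0], x)
    c01 = lerp(probes[0][0][1], probes[1][0][1], x)
    c11 = lerp(probes[0][1][1], probes[1][1][1], x)
    # ...then along y...
    c0 = lerp(c00, c10, y)
    c1 = lerp(c01, c11, y)
    # ...then along z.
    return lerp(c0, c1, z)
```

Sampling this per fragment (rather than once per object) is exactly the difference Lexie Dostal describes below.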
Voxel-Based Global Illumination
In old-fashioned ray-traced rendering, the camera traces a vector to each visible surface and calculates the global illumination for each pixel by following the ray back to the light source(s). Game engines can’t possibly raytrace every pixel in realtime. What’s needed is a way to simplify the process with fewer rays and bake the lighting values into an accessible “map” of volumetric elements, or voxels. Where a rasterized imagemap is built from regularly spaced pixels, a voxelized scene is represented by regularly spaced cubes.
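As a rough illustration of the idea – a Python sketch, not any engine’s actual code – voxelizing a set of surface points just means snapping them to a regular grid and keeping only the occupied cells:

```python
def voxelize(points, voxel_size):
    """Map surface points to the set of occupied voxel cells.

    Empty space simply never appears in the result, which is what
    makes a sparse voxel representation cheap to store.
    """
    occupied = set()
    for x, y, z in points:
        cell = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        occupied.add(cell)
    return occupied
```

Note that the cell count depends on `voxel_size`, not on how many polygons produced the points – which is why distant, dense geometry voxelizes as cheaply as a nearby cube.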
The automagic voxelization process subdivides a scene into smaller and smaller cubes, discarding the empty spaces as it goes. The scale and subdivision of the voxels are adjustable, and not at all dependent on the polygon detail of the scene. Distant objects like mountain terrains can voxelize just as quickly as nearby, detailed models. Finally, each voxel stores a fast look-up map of the light arriving from all directions, sampled with similarly reduced low-resolution light “cones”. The engine generates only the voxels visible to the camera, and data is interpolated from voxel to voxel, updating as needed. The result is voxel-based global illumination!
With physically based rendering (PBR), every surface material is reflective. The difference between a polished mirror and matte skin is only in the brightness and blurriness of their reflections. With traditional raytracing, realistic blurry reflections are computationally expensive; voxel GI approximates them cheaply by simply sampling a lower voxel resolution.
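A hedged sketch of how cone tracing trades sharpness for cost: the rougher the surface, the wider the trace cone, and the coarser (pre-blurred) the voxel level it reads. Function and parameter names here are illustrative, not taken from HXGI:

```python
import math

def mip_for_cone(distance, aperture_radians, voxel_size, max_mip):
    """Pick the voxel mip level whose cell size matches the cone's width.

    A matte surface traces a wide cone, so it reads coarse, pre-averaged
    voxels (a blurry reflection almost for free); a polished surface
    traces a narrow cone and reads the finest voxels available.
    """
    # Diameter of the cone's footprint at this distance along the ray.
    cone_diameter = 2.0 * distance * math.tan(aperture_radians / 2.0)
    # Each mip doubles the voxel size, so the level is a log2 ratio.
    mip = math.log2(max(cone_diameter / voxel_size, 1.0))
    return min(mip, max_mip)
```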
Lexie Dostal, creator of HXGI, says: “One of the big issues with Enlighten is that dynamic objects are lit by a single interpolated light probe. This can make large dynamic objects look extremely out of place. The way my system works is each fragment (pixel) samples the GI data independently. This means large dynamic objects will be lit correctly.”
A handy tool for synchronizing your character’s voice and lips. Import a character model with blend shapes and the plug-in delivers the lip-sync results you want: just play the desired audio through the AudioSource connected to Unilip.
Features:
– realtime operation
– high speed
– adjustable stress levels on specific syllables
– adjustable spring gain and damping to improve the feel of the animation
– really easy to use
– blinking
– eye movement that tracks a target
Render Monster is a tool for capturing image sequences directly from Unity, for later merging into a video file.
The tool is FPS (frames per second) independent and can capture images at any rate.
Images are saved in lossless PNG format at up to 32K resolution (and beyond, depending on hardware).
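Frame-rate-independent capture usually works by advancing game time in a fixed step per saved image, no matter how long each frame takes to render (in Unity this is what `Time.captureFramerate` does). A minimal Python sketch of the idea, with a hypothetical `render_frame` callback:

```python
def capture(duration_s, capture_fps, render_frame):
    """Render frames at fixed game-time steps, independent of wall clock.

    `render_frame(t)` is a hypothetical callback that renders the scene
    at game time t and returns an image (here, any object). However slow
    a frame is to render, game time still advances exactly 1/capture_fps,
    so the saved sequence plays back at the intended speed.
    """
    frames = []
    for i in range(int(round(duration_s * capture_fps))):
        frames.append(render_frame(i / capture_fps))
    return frames
```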
Offline Render is an easy-to-use realtime capture plugin for Unity. It allows you to capture the game view to a multi-channel OpenEXR file, supporting not just the final output image but also common render elements like depth, per-light shadows, diffuse, AO (if present in the scene as an image effect), and other G-buffers.
Offline Render allows you to render your Unity scenes and take them into a normal post-production pipeline using your favorite compositing software.
– multi-channel OpenEXR output
– offscreen rendering
– configurable target framerate
– 10 out-of-the-box render elements (Diffuse, Specular, Emission/Lighting, Reflections, Depth, Velocity, Normals, AO, Motion Vectors, ObjectID)
– forward-rendering supported elements (Beauty, Depth, MotionVector, AO)
– 360 render output (beauty only, saved as PNG)
– Offline Render API for developers: create your own custom passes
– shadow pass works with directional lights only
– Windows only
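For the curious, a 360 (equirectangular) render stores the entire sphere of view directions in one rectangular image. A small Python sketch of the standard pixel-to-direction mapping – illustrative only, not Offline Render’s actual code:

```python
import math

def equirect_dir(u, v):
    """Direction vector for an equirectangular (360) pixel.

    u, v in [0, 1): u wraps around as longitude, v spans latitude
    from top (+90 deg) to bottom (-90 deg). Returns a unit (x, y, z)
    with +z as the forward direction at the image centre.
    """
    lon = (u - 0.5) * 2.0 * math.pi   # -pi .. +pi across the width
    lat = (0.5 - v) * math.pi         # +pi/2 .. -pi/2 down the height
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))
```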
The power of CAD architectural design with intuitive “stretch and pull” control handles directly on the models, all driven by powerful geometric and procedural nodes. Archimatix is several evolutionary concepts to swallow at once, but the smartly arranged workspace (2D-3D shapes on the left, manipulators and repeaters on the right) and infectiously ebullient training materials make the new concepts seem like the dawn of a grand adventure.
Immediately you can make geometric tunnels, platforms, and mazes just by extruding 2D shapes and compositing overlapping 2D patterns. In a few hours you’ll work out how to create dynamic rails, bridges, columns, and walls intuitively, just by exploring the node interface. When you do consult the manual, it exudes so much personality and humor that it feels like a little reward – a user-friendly design feat in itself.
If Archimatix stopped there it would be an interesting plugin for Unity, but it goes to another level where you can give the models random and deterministic procedural design formulas. Walls fill in with windows, arches span between columns, corners follow special rules, skyscrapers grow to different heights…. Group nodes and expose certain parameters as control handles, then save them to the 3D library to re-use as dynamic “intelligent” models. It’s a game changer.
You’re not limited to architecture of course. How about a control panel that fills in with knobs and buttons? A table that adds more plates and chairs? A bookshelf that adds books? It’s not hyperbole to say the possibilities are endless!
PARAMETRIC MODELING – Create 3D models from 2D shapes using extrude, lathe, path sweep, etc. Combine and merge 2D composite shapes, and manipulate the resulting 3D meshes directly in Unity’s scene view by dragging outlines and control points.
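As a toy example of shape-to-mesh modeling – a Python sketch with nothing to do with Archimatix’s internals – here is the simplest case, extruding a closed 2D outline straight up into a prism:

```python
def extrude(shape_2d, height):
    """Extrude a closed 2D outline (list of (x, z) points) into a prism.

    Returns (vertices, quads): a bottom ring, a top ring, and one side
    quad per outline edge, indexed into the vertex list.
    """
    n = len(shape_2d)
    bottom = [(x, 0.0, z) for x, z in shape_2d]
    top = [(x, height, z) for x, z in shape_2d]
    vertices = bottom + top
    # Each side quad joins edge (i, i+1) on the bottom to the top ring.
    quads = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, quads
```

Lathe and path-sweep nodes are the same idea with the second ring rotated or swept along a curve instead of lifted vertically.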
PROCEDURAL NODES – Individual node settings provide granular control over smoothness, bevels, offset, and thickness. UV materials and collider types are handled automatically, and can be tweaked. Archimatix even creates secondary UV maps for light baking!
2D / 3D REPEATERS – Grid repeaters, line repeaters, radial repeaters… turn a column into a hall, a step into stairs, a post into a fence, a panel into a room, a wall into a multi-story building, a house into a village…. Add Unity prefabs to Archimatix repeaters to iterate particle systems, lights, 3rd-party meshes, even NPC’s inside your Archimatix models.
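At its core, a grid repeater just generates placement transforms for copies of one item. A tiny Python sketch with made-up parameter names, purely to show the shape of the idea:

```python
def grid_repeat(count_x, count_z, spacing):
    """Positions for a grid repeater: one column becomes a colonnade.

    Returns an (x, 0, z) placement for each of count_x * count_z copies;
    a real repeater would also emit rotation and scale per copy.
    """
    return [(i * spacing, 0.0, j * spacing)
            for i in range(count_x)
            for j in range(count_z)]
```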
NODE LIBRARY – Save and re-use entire procedural node groups by editing and swapping nodes from the libraries. Make drastic changes to a floorplan while preserving style details like walls and rails, or instantly propagate a change throughout repeated architecture non-destructively. Buildings, props, entire game levels can evolve organically from pre-viz volume placeholders to final design, remaining editable and dynamic the whole time.
“INTELLIGENT” MODELS – Assign formulas and spatial relationships between various parts of the model. Add random jitter, scale, and rotation to arrays, or create flexible rules for repeaters. Dynamic models can be stamped into prefabs, creating an entire “city” of buildings or a variety of props that follow the same design rules. A building can have a ground floor and roof with any number of repeated floors in between, including connecting staircases.
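Deterministic “random” jitter is what keeps a procedural model stable between edits: the same seed always yields the same layout, so your city doesn’t reshuffle every time you tweak a parameter. A minimal Python sketch of the idea (names are illustrative, not Archimatix’s API):

```python
import random

def jittered_row(count, spacing, jitter, seed=0):
    """Evenly spaced positions along one axis, nudged by seeded jitter.

    Using random.Random(seed) rather than the global RNG makes the
    'randomness' reproducible: same seed, same layout, every time.
    """
    rng = random.Random(seed)
    return [i * spacing + rng.uniform(-jitter, jitter) for i in range(count)]
```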
CURRENT LIMITATIONS but FUTURE POTENTIAL –
• An API is promised that will allow dynamic models to be manipulated at runtime (Archimatix will become not just an advanced modeler but a revolutionary in-game mechanic!).
• No deformer or 3D Boolean nodes yet. You’ll need other tools for those tasks.
• No OBJ export… It’s trivial to use a free exporter for Unity, but you will wonder why Archimatix is “just” a Unity plugin. Seriously, I would have paid the same price (or more) for stand-alone software! With a few tweaks, Unity is a plugin for Archimatix!
• No SVG/vector/font import. For now we have to use the Freeform 2D shape tool or “turtle” script. Nodes for importing 2D data shouldn’t be difficult; expect them in updates.
• The 3D Library jumps from a few basic primitives to full-blown Italian villas, LOL! It leaves a pregnant gap of potential add-ons in all genres and styles. An AX file exchange and 3rd-party artist support could make AX into a design platform, not just a modeler.
The new Unity post-processing stack is an über effect that combines a complete set of image effects into a single post-process pipeline. This has a few advantages:
– Effects are always configured in the correct order.
– It allows combination of many effects into a single pass.
– It features an asset-based configuration system for easy preset management.
– All effects are grouped together in the UI for a better user experience.
It comes with the following effects:
– Antialiasing (FXAA, Temporal AA)
– Ambient Occlusion
– Screen Space Reflections
– Depth of Field
– Motion Blur
– Eye Adaptation
– Color Grading
– User Lut
– Chromatic Aberration
The stack also includes a collection of monitors and debug views to help you set up your effects correctly and debug problems in the output.