
Jo's alumni tech stuff

Technical art, games, tech

9 replies to this topic

#1
Sketchie

    Look Ginger! A clue!

  • 303 posts
  • Gender:Male
  • Location:a lonely planet spinning its way toward damnation

  • Work Thread

  • Company: Beard Envy

Figured I'd start a little collection of tech musings for anyone interested. I've spent the last few days with Unreal's render targets and am loving it. The documentation on render targets isn't hugely great, but Unreal's resident tech artist has a pretty great blog that I've been s̶t̶e̶a̶l̶i̶n̶g borrowing things from here.

 

Warning, lots of gifs ahead

 

I've been working on a slime painter that works at runtime in Unreal. Final product:

 

[Image: 7a0186645e2c65efde9318e1cf07ea03.gif]

 

So... explanation time. The slime that is "growing" on stuff is actually a second mesh hidden underneath the main geometry. The shader takes the mask that is being painted into a render target and uses it to change opacity and vertex offsets. The mesh has smoothed normals (and the vertices are moved along their respective normals) so that it won't break into chunks when the vertices get pushed around, which it otherwise would, because UV islands are the worst.
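
In rough HLSL terms, the core of the slime material looks something like this (a sketch of the idea only, not the actual node graph; MaskRT, SmoothedNormal and GrowDistance are names I've made up):

// Sample the painted mask from the render target using the slime mesh's UVs.
float mask = MaskRT.SampleLevel(MaskRTSampler, UV, 0).r;

// Vertex offset: push each vertex out along its smoothed normal.
// Smoothed normals keep the UV islands welded together as they move.
float3 WorldPositionOffset = SmoothedNormal * mask * GrowDistance;

// The same mask drives opacity, so unpainted areas stay invisible.
float OpacityMask = mask;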

 

I'm using quite a few interesting things to make this work, so I'll have a go at explaining every step.

 

Here's the mesh in question:

 

[Image: SrwcICs.png]

 

and here are its UVs:

 

[Image: QpXR2vh.png]

 

The most important thing to realise about render targets is that they can only deal with materials that calculate everything within the material itself. That's probably not the best way of describing it (and may not be technically correct), but it's how I understand them (if you know the correct details, please elaborate). Essentially this means you can't use nodes like world position, object position, vertex normals, etc. In other words, if the material wouldn't work correctly when put on a plane, it won't work on your object.

 

Obviously I'm using world coordinates for this effect even though I just said that doesn't work, so the vast majority of the work here is finding another way to use world coordinates, or at least a way of faking them. For the first part I shamelessly copied how Ryan Brucks did it here, including the automated UV dilation that makes an excellent positional map (although it is only in relative bounds).

 

The shader used is this:

 

[Image: nMfyGTq.png]
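
Roughly what I understand the unwrap shader to be doing, written out as HLSL-style pseudocode (my reconstruction of the node graph above, so treat all the names as assumptions):

// World position offset: move each vertex from where it actually is to where
// its UV coordinate says it should sit on a flat plane in front of an
// orthographic scene capture, i.e. flatten the mesh into its UV layout.
float3 flatTarget = UnwrapOrigin + float3(UV.x, UV.y, 0) * UnwrapSize;
float3 WorldPositionOffset = flatTarget - AbsoluteWorldPosition;

// Emissive: encode the vertex's original position, remapped into 0-1
// relative to the object's bounds (the "relative bounds" mentioned above).
float3 Emissive = saturate(LocalPosition / (2 * ObjectBounds) + 0.5);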

 

and results in this:

 

[Image: soLtokB.png]

 

Except that it doesn't. This material wouldn't output anything meaningful to a render target, so how do we fix that? To get the image above I use a scene capture component rendering to a render target that is set to exactly the same size as the output size of the unwrap shader (conveniently 1000). If this is done in an empty scene, the alpha channel is also hugely useful: all the black areas are solid 1 and everything else is 0. This is used for the second bit of wizardry that I steal from the site linked above.

 

Because I want this texture to mip nicely and make UV seams a non-issue, it needs to be dilated. I'm not going to claim to understand every part of this step, but I know that it works. The following shader setup is used; note that the mapDilate node is a custom node running the code segment from that site, which I'll also paste below (albeit with an added semicolon on one of the lines).

 

[Image: 78rzVnt.png]

//////////////// UV Positional Dilation ///////////////////////////
//** Tex **// Input Texture Object storing Volume Data
//** UV **// Input float2 for UVs
//** TextureSize **// Resolution of render target
//** MaxSteps **// Pixel Radius to search


float texelsize = 1 / TextureSize;
float mindist = 10000000;
float2 offsets[8] = {float2(-1,0), float2(1,0), float2(0,1), float2(0,-1), float2(-1,1), float2(1,1), float2(1,-1), float2(-1,-1)};

// SampleLevel returns a float4; take .rgb to avoid the implicit-truncation warning.
float3 sample = Tex.SampleLevel(TexSampler, UV, 0).rgb;
float3 curminsample = sample;

// Only empty (black) texels need filling in.
if (sample.x == 0 && sample.y == 0 && sample.z == 0)
{
    int i = 0;
    while (i < MaxSteps)
    {
        i++;
        int j = 0;
        // Search outwards in rings of 8 directions for the nearest filled texel.
        while (j < 8)
        {
            float2 curUV = UV + offsets[j] * texelsize * i;
            float3 offsetsample = Tex.SampleLevel(TexSampler, curUV, 0).rgb;

            if (offsetsample.x != 0 || offsetsample.y != 0 || offsetsample.z != 0)
            {
                float curdist = length(UV - curUV);

                if (curdist < mindist)
                {
                    // Sample slightly further along the same direction and
                    // extrapolate, so the dilated value continues the gradient.
                    float2 projectUV = curUV + offsets[j] * texelsize * i * 0.25;
                    float3 direction = Tex.SampleLevel(TexSampler, projectUV, 0).rgb;
                    mindist = curdist;

                    if (direction.x != 0 || direction.y != 0 || direction.z != 0)
                    {
                        float3 delta = offsetsample - direction;
                        curminsample = offsetsample + delta * 4;
                    }
                    else
                    {
                        curminsample = offsetsample;
                    }
                }
            }
            j++;
        }
    }
}

return curminsample;

When this gets rendered onto a render target it makes this glorious piece:

 

[Image: wDzU0II.png]

 

Now is where I can claim to have actually made stuff. This mask is based entirely on object bounds, so I can't use it as the world position mask I need; I had to figure out how to convert it, and the answer is obviously more render targets. Frustratingly, the object position and object bounds nodes return different things in the blueprint editor vs the material editor, but because I'm using render targets they don't matter anyway: all of those variables have to be handled in a blueprint. The material is as follows; the variables themselves are decided in the object blueprint.

 

[Image: NbrWOEi.png]

 

[Image: RI1U3rl.png]
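
In HLSL terms the conversion boils down to something like this (a sketch with made-up parameter names; in practice it's the node graph above, with the values fed in from the blueprint):

// Turn the bounds-relative (0-1) position map back into absolute world space.
// BoundsOrigin and BoundsExtent are assumed to come from the object blueprint.
float3 rel = PositionMap.SampleLevel(PositionMapSampler, UV, 0).rgb;
return BoundsOrigin + (rel * 2 - 1) * BoundsExtent;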

 

Because the slime mesh is static I only need to run this once; if you're using a similar method on a moving actor, you'll need to re-render this before rendering the next step.

 

So now we have the mesh's absolute world position in the form of a texture. All this just to avoid using a world position node... ah well, final stretch now. Onto the actual painting material; again very simple, all the variables are handled in the blueprint:

 

[Image: nOYyeB3.png]

 

blueprint for painting:

 

[Image: n7cAysC.png]

 

Paint is just triggered by a mouse location trace.
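
Sketched in HLSL, the painting logic amounts to a spherical world-space brush (my paraphrase, not the exact graph; BrushLocation comes from the trace hit and the other names are assumptions):

// Compare each texel's stored world position against the brush location
// and add a soft falloff into the existing mask.
float3 world = WorldPosMap.SampleLevel(WorldPosMapSampler, UV, 0).rgb;
float brush = saturate(1 - distance(world, BrushLocation) / BrushRadius);
return saturate(ExistingMask + brush * BrushStrength);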

 

If anyone has any tips/tricks that I should know, they'd be greatly appreciated; I'm still learning a lot.

Or if you have any crit, then fire ahead.

 

And a final treat, here's the painter in action again, but with the raw render target in view above so you can get a slightly better idea as to what's going on:

 

[Image: 26f4e7228bdc89409e4f98df69350435.gif]

 

Cheers


  • conradam likes this

#2
tweedie

    3rd Year Games Art

  • 261 posts
  • Gender:Male

  • Work Thread

As you're authoring the smoothed mesh before runtime, rather than procedurally creating it from the level layout, why not just bake a world space position map in something like Substance Painter? Or was this just a learning exercise? Either way, looks cool, thanks for the detailed write up :)

#3
Sketchie

    Look Ginger! A clue!

  • 303 posts
  • Gender:Male
  • Location:a lonely planet spinning its way toward damnation

  • Work Thread

  • Company: Beard Envy

tweedie said:

    As you're authoring the smoothed mesh before runtime, rather than procedurally creating it from the level layout, why not just bake a world space position map in something like Substance Painter? Or was this just a learning exercise? Either way, looks cool, thanks for the detailed write up :)

 

I don't actually have Substance Painter installed at the moment; I didn't even know it could bake world space position maps. I guess I also like the idea of having it all done in one package. Do you know of a way to generate the mesh within Unreal? This was essentially made as a proof of concept and a learning exercise to familiarize myself more with render targets. My final idea for authoring the mesh is to export the entire level geometry into ZBrush, dynamesh it into one thing, then cut it into manageable cubes. I need to actually test it at some kind of large scale to see if it's even viable, though. The slime shader is as cheap as I could make it, so it shouldn't be too expensive to lay on everything. I guess the next step is to look at texture atlasing so that the render targets don't eat up all of the GPU memory.



#4
tweedie

    3rd Year Games Art

  • 261 posts
  • Gender:Male

  • Work Thread

I'm not sure how you could do it through blueprint; I can imagine it would be a bit of a mess too lol. Painter can do position maps, so even if the mesh wasn't kept at the origin, it'd only be one vector operation away from being world space, so that could be preprocessed. Re: mesh generation, I think your current plan is probably the simplest way of doing things. It's a shame that each mesh can't just have a duplicate of itself within the fbx, so that the material that renders the actual object can also handle the slime, masking the vertex displacement by vertex colours on the duplicate (hope that makes sense). That would mean no extra meshes have to be sent to the GPU and objects could be culled as normal, but as you mentioned previously I guess it would cause creasing / obvious seams in the slime. Atlasing sounds like fun. Out of curiosity, how high res are you going with the render targets? I would've thought you could get away with them being fairly low, as slime is pretty low frequency. Cool test case for a learning experiment / proof of concept though, I look forward to updates :)

#5
Sketchie

    Look Ginger! A clue!

  • 303 posts
  • Gender:Male
  • Location:a lonely planet spinning its way toward damnation

  • Work Thread

  • Company: Beard Envy

As a bit of a tangent, I've been working on a flipbook baker for some caustics that I've rendered out. Previously I've compiled these flipbooks in Photoshop, which takes some time and is needlessly precise, so I figured I'd make a baker that can take an array of images and turn them into a flipbook. While I was at it, I decided to fix a common problem with flipbooks: tiling, or more specifically the lack of dilation when tiling.

 

The first incarnation of the baker leaves some serious artifacts; these are also present if you compile it in Photoshop:

 

[Image: c9f5dd93929b53682bff7e202583c8c5.gif]

 

Obviously I've zoomed in far enough for this to be very obvious, so ignore the pixelisation. The result we're looking for is this:

 

[Image: a472887ee23b0107857f0a168e27522f.gif]

 

There are still a few points where you can notice the seams, but only if you're looking for them. Again, this is zoomed in, so from a reasonable distance it's pretty unnoticeable.

 

So how does this work?

 

Essentially, each frame needs to be scaled down a few pixels and then tiled with itself along the edges. This is a perfect method of dilation provided that the textures are then read from the scaled-down area; that way any mipping will always result in seamless compression. This is pretty easy to do to a single texture, but in a flipbook each one of the frames needs to be scaled down relative to its position on the flipbook. Turns out that this is pretty easy too: I just work one texture at a time and add them to the render target.

 

Here's the material I came up with for each individual frame. I don't want to claim that this is the only way of doing it, nor that it's the best way, but it does work:

 

[Image: NO2bDZE.png]

 

I'll have a go at explaining the major things happening here. First I need to multiply the texture coordinate so that it tiles the desired number of times; this should be the square root of the total number of frames (this method will break if you have a number of frames whose square root isn't an integer, but you shouldn't do that in a flipbook anyway). The next step is to make the current frame's location sit on the 0-1 range of the UVs (this isn't really necessary, but it makes some math later easier). This is done with the bit of math coming from the frame parameter: fmod gets the x location, dividing and flooring gets the y location (this assumes that frames are numbered 0-15 for a 16 frame array, which they should be).
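
As a sketch in HLSL (the variable names are mine):

// Place frame 'Frame' on a RootFrames x RootFrames sheet, frames numbered from 0.
float col = fmod(Frame, RootFrames);    // x location on the sheet
float row = floor(Frame / RootFrames);  // y location on the sheet
// Tile the UVs, then shift so the current frame's cell spans 0-1.
float2 frameUV = UV * RootFrames - float2(col, row);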

 

Now I need two values: the UV coord of the top left of the shrunken image and that of the bottom right. Ideally these should be perfect pixel locations, but they don't need to be. (Writing this has made me realise that my math is in fact slightly wrong, but it still works... whoops.) Dividing the render target resolution by the rootFrames value gives the resolution of the individual frame, and 1 divided by that gives the distance you have to move on the UVs to move a single pixel. Adding that value to 1 will tile the texture out by 1 pixel in the positive U and V direction, and subtracting it from 0 will tile it 1 pixel in the negative U and V. I multiply that value first so that the dilation can be adjusted dynamically, tiling it that many pixels. Those two values then need to be turned into float2s, so I just append them to themselves. For the sake of simplicity we'll call those 1.1 and -0.1. Now if I take the lerp node and plug 1.1 into the 1 value and -0.1 into the 0 value, it will stretch the texture relative to the UVs I've already made, which is why it's important that the target frame location is between 0 and 1.
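
The same offsets written out in HLSL (again with my own names, continuing from the snippet above):

// One pixel of the individual frame, in that frame's local UV space.
float frameRes = TargetResolution / RootFrames;  // pixels per frame
float pad = (1 / frameRes) * DilationPixels;     // adjustable dilation
// Stretch the 0-1 frame UVs out past the edges; with a wrapping sampler the
// texture tiles into the border, shrinking the visible content by 'pad'.
float2 bakeUV = lerp(float2(-pad, -pad), float2(1 + pad, 1 + pad), frameUV);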

 

All that needs to be done next is to mask that location away, so that I can just run a loop through each texture rendering into one render target without them overlapping each other. This is the bit that I'm certain I'm doing in a far too roundabout way, so if you have a better method let me know. Essentially I use if nodes on the U and V channels separately to make below 0 equal 0 and above 1 equal 0, then multiply the four results so that I'm left with a square in the desired location, multiply that by the dilation, and tadah! That frame is done.
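
For what it's worth, the four if nodes collapse to a couple of step() calls in HLSL (a sketch, continuing from the snippets above, and not necessarily better than the node version):

// 1 inside the 0-1 frame cell, 0 outside, so each frame only writes its own square.
float mask = step(0, frameUV.x) * step(frameUV.x, 1)
           * step(0, frameUV.y) * step(frameUV.y, 1);
return FrameTexture.SampleLevel(FrameTextureSampler, bakeUV, 0) * mask;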

 

The flipbook material just needs to apply the same offsets (again just using a lerp):

 

[Image: oiD6Vbi.png]
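
My reconstruction of that remap in HLSL (inverting the bake-time lerp; pad must match the value used when baking, and all the names are mine):

// Read only the shrunken interior of each frame so the mips stay seamless.
float col = fmod(Frame, RootFrames);
float row = floor(Frame / RootFrames);
float2 inner = (uv + pad) / (1 + 2 * pad);  // inverse of lerp(-pad, 1+pad, uv)
float2 sheetUV = (inner + float2(col, row)) / RootFrames;
return FlipbookSheet.Sample(FlipbookSheetSampler, sheetUV);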

 

This material is also referenced in the baker with all the same variables linked up; this lets me see the dilation required for various view distances.

 

Hopefully this is useful to someone; I feel like more people will be using tiling flipbooks than world painting materials.

 

Any tips/corrections/criticism, feel free to call me out.


Edited by Sketchie, 18 July 2017 - 06:59 PM.


#6
chrisdunham95

    VFX Artist

  • 710 posts
  • Gender:Male
  • Location:Nottingham

  • Work Thread

  • Company: Rare

Are you referring to Houdini flipbooks? (Or flipbooks as in a general book of textures / sequence of images?)


Edited by chrisdunham95, 18 July 2017 - 05:15 PM.


#7
Sketchie

    Look Ginger! A clue!

  • 303 posts
  • Gender:Male
  • Location:a lonely planet spinning its way toward damnation

  • Work Thread

  • Company: Beard Envy

I'm referring to the texture atlases that Unreal uses for animated textures. Here's the resultant one from this method with the 2 pixel dilation added:

 

[Image: R7uA4hu.jpg]

 

 

The flipbook material node divides it up and reads it one chunk at a time, so if you were to plug this into that, you'd get the caustics pattern with some crudely drawn numbers over it in ascending order.

 

No idea if that's how Houdini flipbooks work; I've yet to dive into Houdini.


  • chrisdunham95 likes this

#8
chrisdunham95

    VFX Artist

  • 710 posts
  • Gender:Male
  • Location:Nottingham

  • Work Thread

  • Company: Rare

Ahhh I see, yeah, similar but different, that makes sense. My thought was you could build this in Houdini and export it without having to patch it all back together, but I'm not particularly knowledgeable on the technicals, so I could just be talking rubbish haha. Looking cool though man.



#9
DigitalSalmon

    Alumni

  • 1253 posts
  • Gender:Male
  • Location:St Albans, UK

  • Work Thread

If you're picking a naming convention for material parameters

 

[Title] PropertyName

 

would be what I'd go with.

If you're keen on getting into this stuff, have a go at the 'custom' node. HLSL is one of the easiest languages to read/write, and your node graphs will get much cleaner.

 

The edge artifacts look to me like an issue with the wrap mode of the texture you're sampling to generate the sheet.

Doing great stuff, keep at it (:



#10
Sketchie

    Look Ginger! A clue!

  • 303 posts
  • Gender:Male
  • Location:a lonely planet spinning its way toward damnation

  • Work Thread

  • Company: Beard Envy

Okay, time to update this with a conundrum I've been working on today. In a project I'm working on there are some TV screens scattered about the level that, when the player approaches, change to display an image of a "guide" helping the player through the game. Simple enough, but I was asked to make a setup that would enable nearby clusters of screens to display the same image spread among the TVs in that cluster. Here's an example (my explanation kinda sucks): a single TV vs a TV cluster, with Ben's face being the temporary guide:

 

[Image: 9kAcetB.png]

 

So the group of TVs simply makes the image larger and spreads it over all of the screen real estate. My first attempt at a solution revolved around using world projection and offsets, which, while it worked sometimes, didn't work at all if the TVs were facing a direction other than the XY or Z axes. It also made it a bit of a pain to set up, as the TVs would have to be placed and then the offsets figured out with scaling taken into account too. Way too much effort for such a limited effect.

 

My solution: use two points in world space and draw the UV coordinates between them. While technically you need three points to define a plane, I only want two bits of rotational information, so two points will have to do. Also, all of the math for it can very easily be put into a material function, so it's easy to reuse should anything else need something like it.

 

Here's how it works:

 

[Image: PaTZoL3.png]

 

The input information is incredibly simple, needing only the coords of the top left corner and the bottom right (although top right and bottom left would work fine too) and the world position information.

 

The output UV coordinates are obviously a float2. The V axis is incredibly easy to figure out: simply subtracting the z value of the lower coord from the world position sets that point to zero, and dividing by the difference in height between the two points makes it go between zero and one. I use a lerp to invert it rather than a frac and a oneMinus because of another node in the TV material that was producing rounding errors.

 

The U axis is far more of a pain to figure out, but at this point it's technically only turning two dimensions into one, so if you think about it as a linear gradient it makes a lot more sense (and in fact it is simply gradient math being used to convert it in this example). Here is the math in question (treating the R and G channels of the points as X and Y, with x2 and y2 being the top point, x1 and y1 the bottom point, and world x and y simply x and y):

A = (x2 - x1)
B = (y2 - y1)
c1 = (A * x1) + (B * y1)
c2 = (A * x2) + (B * y2)

// c2 should always be larger than c1

C = (A * x) + (B * y)

output (C - c1)/(c2 - c1)

Ignore the multiplying-by-one bit in the nodes; that was from when I was looking at building a gradient with output colours other than 0 and 1, which is obviously all you need for UV coords.
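
The whole function, sketched in HLSL with dot products standing in for the gradient math (the names are mine; TopLeft and BottomRight are the two placed points, and which corner ends up at 0 or 1 just depends on the sign conventions and the inverting lerp mentioned above):

// V: how far up we are between the two points' heights.
float V = (WorldPos.z - BottomRight.z) / (TopLeft.z - BottomRight.z);

// U: project the world XY onto the gradient running between the two points.
float2 dir = TopLeft.xy - BottomRight.xy;  // (A, B) in the math above
float c1 = dot(dir, BottomRight.xy);       // gradient value at one end...
float c2 = dot(dir, TopLeft.xy);           // ...and at the other
float C = dot(dir, WorldPos.xy);
float U = (C - c1) / (c2 - c1);

return float2(U, V);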

 

Hopefully this made some sense. Now all the level designer has to do is assign a group of TVs to a cluster and position the top left and bottom right points in the editor. The material adjusts in real time, so it's very easy to make sure that it looks right.

 

As always, questions/criticisms, feel free to fire away.


Edited by Sketchie, 30 August 2017 - 04:05 PM.

  • Josh203 and Christian Fryer like this




