1. Post #521
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    I find it really strange that, given the number of things GLM supports and its ubiquity, it doesn't have a preprocessor flag like "#define GLM_USING_QT" or something that allows conversion to/from Qt matrix/vector types. Same with Qt not supporting GLM in some fashion, be it through a preprocessor define or something. It adds an odd bit of bloat and boilerplate interface code between parts of my application, and it gets verbose when I have to use lots of calls to glm::value_ptr and such.
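    The glue ends up looking roughly like this, a sketch of the free functions I keep writing by hand (assuming glm's type_ptr helpers, and that QMatrix4x4's float-array constructor is row-major while constData() is column-major, which is my understanding):

    Code:
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>
    #include <QVector3D>
    #include <QMatrix4x4>

    inline QVector3D toQt(const glm::vec3& v)  { return QVector3D(v.x, v.y, v.z); }
    inline glm::vec3 toGlm(const QVector3D& v) { return glm::vec3(v.x(), v.y(), v.z()); }

    // constData() is column-major, same layout as glm, so the pointers line up directly.
    inline glm::mat4 toGlm(const QMatrix4x4& m) { return glm::make_mat4(m.constData()); }

    // QMatrix4x4's float* constructor expects row-major input, so transpose first.
    inline QMatrix4x4 toQt(const glm::mat4& m)
    {
        return QMatrix4x4(glm::value_ptr(glm::transpose(m)));
    }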

    It's almost driving me to create my own vector type that I use in the relevant code, so that I can define conversions from the other two types in my own struct. But then I have three separate vector types in my code, which just seems silly (and probably isn't worth the potential issues).

    Edited:

    I also can't help but appreciate GLM more and more all the time. It's certainly not a fancy or attention-grabbing library, but it's absolutely bulletproof and does everything you need it to do, even without the GTX extensions.

  2. Post #522
    Gold Member
    Karmah's Avatar
    December 2007
    6,890 Posts
    GLM is awesome

    I too have redundant vector and matrix classes with glm::mat4x4, aiMatrix4x4, and btMatrix4x4; I'm sure I even have a fourth one hiding somewhere else.

  3. Post #523

    May 2016
    187 Posts
    whyyy not just define conversion operators?

  4. Post #524
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    whyyy not just define conversion operators?
    You can't do that outside the scope of the original class declaration/definition, AFAIK. So I'd have to go tinker with GLM's/Qt's source code for these items, and then I have to make sure to package my edits with my project, and so on. And make sure that the Qt elements are included in the right GLM elements, and vice versa. OpenGL stuff can get tangled too, because you end up with multiple definitions pretty easily if you're not really careful.

    Edited:

    I mean, correct me if I'm wrong, because just defining a conversion operator would be great.

  5. Post #525
    Gold Member
    Karmah's Avatar
    December 2007
    6,890 Posts
    All I do is use GLM as the standard in my application, and only use the others when absolutely necessary.
    Like when I'm updating my own objects at the end of a physics tick, I grab their transformation matrix and convert it to glm's mat4. Or when I'm grabbing animations from an assimp model, I'll convert them to GLM.
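    The Bullet one, for instance, is only a few lines (a sketch; it relies on btTransform::getOpenGLMatrix filling a column-major array, and the assimp one needs an extra transpose on top since aiMatrix4x4 is row-major):

    Code:
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>
    #include <LinearMath/btTransform.h>

    // Bullet -> GLM: getOpenGLMatrix writes a column-major 4x4 array, same layout as glm.
    glm::mat4 toGlm(const btTransform& t)
    {
        btScalar m[16];
        t.getOpenGLMatrix(m);
        glm::mat4 out;
        for (int i = 0; i < 16; ++i)
            glm::value_ptr(out)[i] = float(m[i]); // works whether btScalar is float or double
        return out;
    }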

  6. Post #526
    cam64DD's Avatar
    November 2008
    949 Posts
    I've added ratings to my facepunch clone game about internet forums!



  7. Post #527

    May 2016
    187 Posts
    You can't do that outside the scope of the original class declaration/definition, AFAIK. So I'd have to go tinker with GLM's/Qt's source code for these items, and then I have to make sure to package my edits with my project, and so on. And make sure that the Qt elements are included in the right GLM elements, and vice versa. OpenGL stuff can get tangled too, because you end up with multiple definitions pretty easily if you're not really careful.

    Edited:

    I mean, correct me if I'm wrong, because just defining a conversion operator would be great.
    Oh you're right. That's actually pretty sucky.
    I guess I'd go with the custom vector type then, just for the conversions, an implementation of the adapter pattern if you will. If you define conversions for that type to and from all the corresponding vector types, you can use it everywhere in your code and plug it into the glm algorithms as well as the Qt stuff. Seems pretty reasonable to me.

    I don't really see an issue with having multiple vector types. Think of it this way: your custom type is the main one for your application, glm provides you with algorithms, and Qt is a library you use, so it's pretty normal that it defines some things you already have. Code duplication is not inherently evil; it sometimes has reasonable justifications, like in this case.
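    Something like this is what I have in mind, just a sketch (the type name is made up):

    Code:
    #include <glm/vec3.hpp>
    #include <QVector3D>

    // One application-owned vector type that converts implicitly to/from both libraries.
    struct Vec3
    {
        float x = 0.0f, y = 0.0f, z = 0.0f;

        Vec3() = default;
        Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) {}
        Vec3(const glm::vec3& v) : x(v.x), y(v.y), z(v.z) {}
        Vec3(const QVector3D& v) : x(v.x()), y(v.y()), z(v.z()) {}

        operator glm::vec3() const { return glm::vec3(x, y, z); }
        operator QVector3D() const { return QVector3D(x, y, z); }
    };
    The converting constructors and conversion operators live entirely on your own type, so you never have to touch GLM's or Qt's headers.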

  8. Post #528
    www.bff-hab.de
    DrDevil's Avatar
    May 2006
    3,308 Posts
    I've added ratings to my facepunch clone game about internet forums!


    I'm legitimately interested to see the game mechanic you make out of that.

  9. Post #529
    Edvinas's Avatar
    August 2010
    871 Posts
    Trying to simulate soft body physics with Box2d

  10. Post #530
    Fourier's Avatar
    July 2014
    4,010 Posts
    I think it was 1x1x1, oriented with the Z axis going through the middle.
    Ok thank you :). Gonna post results (will take some time tho)

  11. Post #531
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    Vulkan continues to be fun, if your idea of fun is slightly masochistic and involves getting to (or having to, depending on your opinion of it lol) explicitly control something as detailed as GPU memory allocation, reallocation, and freeing. I was about to start rendering a triangle, but then I found out that Vulkan gives you so much control that I might as well have fun implementing a circular buffer to use for "staging memory" right now, and texture streaming down the road (if it works out, or makes sense). But then I also realized that I'd probably get fragmentation, or at least inefficient use of the buffer's space, if I didn't take time to make sure that things were optimally arranged. And Vulkan expects everything to be aligned, so you have to make sure that (if shifting things around to defrag/optimize, for example) memory addresses are all aligned.

    This alignment changes based on the type of data being used in the buffer too, which might be because allocating for an image could explicitly use texture memory on the GPU, whereas something like uniform buffers or mesh data can reside in one of the caches (if available) or in constant memory or something, I dunno. It is neat that you can easily get things like the max VRAM size, or the default alignment, or the max number/size of allocations for each GPU in a system, and that these values can be easily fetched at runtime or compile time. I can't think of a way you could do that in OpenGL, and you never really had to think about managing memory usage like this in OpenGL, but I can see how it could be really quite useful for automatically configuring the memory layout/usage depending on the hardware running your program. So writing a circular buffer quickly meant also writing a defragmenter that I can run in between frames (I think, I'm still horribly new at trying to do fancy rendering stuff) so I don't end up with a shitload of pointlessly wasted space in my buffer. I'm also imagining that there is just no way I can ever outdo the driver when it comes to this stuff, either. I've dealt with allocating GPU resources before with CUDA, but that was literally as simple as using something like cudaMalloc, cudaMemset, cudaFree, etc., not something that makes me feel like the world's worst GPU driver programmer.
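    The bookkeeping itself isn't much more than rounding offsets up to whatever boundary the driver reports, something like this helper (a sketch, nothing fancy):

    Code:
    #include <vulkan/vulkan.h>

    // Round a suballocation offset up to the alignment reported by
    // vkGetBufferMemoryRequirements / vkGetImageMemoryRequirements
    // (the spec guarantees these alignments are powers of two).
    inline VkDeviceSize alignUp(VkDeviceSize offset, VkDeviceSize alignment)
    {
        return (offset + alignment - 1) & ~(alignment - 1);
    }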

    My head's all mush though, and I wish I had more to show for this effort, but I've been super busy at work and more than a little tense as we wait to negotiate a really big programming contract. I'm not sure how much I can disclose, but it's pretty much in the vein of work I'd like to do for a career and it'd be awesome to have on my portfolio. So, I've been practicing other things in C++ that would be related to this potential project and haven't had time to work on Vulkan ;c

    Downside of the aforementioned contract: the codebase terrifies our C++ guru and he asked me if I was okay trying to read comments in a foreign language, so uh, yeah, this could be interesting.

  12. Post #532
    Gold Member
    TH3_L33T's Avatar
    June 2006
    1,574 Posts
    So, just messing around, I wondered how much of a difference it would make in a for loop if array.Length was cached, as in for(int i = 0, len = array.Length; i < len; i++). I am sure it's been done before, but I wanted to do it myself and thought someone else might find it interesting too. This was done in C#, by the way.
    The loops looked like the following -

    Type 1.
    Code:
    for (int x = 0; x < arr.Length; x++)
    {
        ...
    }
    Type 2.
    Code:
    for (int x = 0, len = arr.Length; x < len; x++)
    {
        ...
    }
    Here are charts showing the data; array sizes go from 100 to 2 million. Each array size was looped over 1000 times for both types and the average was taken.




    Just found it interesting and thought I would post it.

  13. Post #533
    Gold Member
    Killowatt's Avatar
    September 2009
    3,775 Posts


    Voxel planet part three-hundred made in the fantastic Unreal Engine 4™

    Gonna add that greedy meshing and networked levels next probably

  14. Post #534
    AtomiCal's Avatar
    December 2005
    729 Posts
    So, just messing around, I wondered how much of a difference it would make in a for loop if array.Length was cached, as in for(int i = 0, len = array.Length; i < len; i++). I am sure it's been done before, but I wanted to do it myself and thought someone else might find it interesting too. This was done in C#, by the way.
    The loops looked like the following -

    Type 1.
    Code:
    for (int x = 0; x < arr.Length; x++)
    {
        ...
    }
    Type 2.
    Code:
    for (int x = 0, len = arr.Length; x < len; x++)
    {
        ...
    }
    I guess .Length is a property with some logic built into the getter and len is just a local variable on the stack, which is super fast to access.

    Interesting to see the comparison though.

  15. Post #535
    Fourier's Avatar
    July 2014
    4,010 Posts
    Vulkan continues to be fun, if your idea of fun is slightly masochistic and involves getting to (or having to, depending on your opinion of it lol) explicitly control something as detailed as GPU memory allocation, reallocation, and freeing. I was about to start rendering a triangle, but then I found out that Vulkan gives you so much control that I might as well have fun implementing a circular buffer to use for "staging memory" right now, and texture streaming down the road (if it works out, or makes sense). But then I also realized that I'd probably get fragmentation, or at least inefficient use of the buffer's space, if I didn't take time to make sure that things were optimally arranged. And Vulkan expects everything to be aligned, so you have to make sure that (if shifting things around to defrag/optimize, for example) memory addresses are all aligned.

    This alignment changes based on the type of data being used in the buffer too, which might be because allocating for an image could explicitly use texture memory on the GPU, whereas something like uniform buffers or mesh data can reside in one of the caches (if available) or in constant memory or something, I dunno. It is neat that you can easily get things like the max VRAM size, or the default alignment, or the max number/size of allocations for each GPU in a system, and that these values can be easily fetched at runtime or compile time. I can't think of a way you could do that in OpenGL, and you never really had to think about managing memory usage like this in OpenGL, but I can see how it could be really quite useful for automatically configuring the memory layout/usage depending on the hardware running your program. So writing a circular buffer quickly meant also writing a defragmenter that I can run in between frames (I think, I'm still horribly new at trying to do fancy rendering stuff) so I don't end up with a shitload of pointlessly wasted space in my buffer. I'm also imagining that there is just no way I can ever outdo the driver when it comes to this stuff, either. I've dealt with allocating GPU resources before with CUDA, but that was literally as simple as using something like cudaMalloc, cudaMemset, cudaFree, etc., not something that makes me feel like the world's worst GPU driver programmer.

    My head's all mush though, and I wish I had more to show for this effort, but I've been super busy at work and more than a little tense as we wait to negotiate a really big programming contract. I'm not sure how much I can disclose, but it's pretty much in the vein of work I'd like to do for a career and it'd be awesome to have on my portfolio. So, I've been practicing other things in C++ that would be related to this potential project and haven't had time to work on Vulkan ;c

    Downside of the aforementioned contract: the codebase terrifies our C++ guru and he asked me if I was okay trying to read comments in a foreign language, so uh, yeah, this could be interesting.
    You can implement circular buffers on textures though, can't you? A 512x512 texture can hold 512 circular buffers of ~512 elements each, for example (slightly fewer than 512 because you need some space for the circular buffer control variables).

  16. Post #536
    suXin's Avatar
    July 2009
    1,559 Posts
    So I found out node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript node project transpiles async/await by itself and upgrading was not required.

    Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature is just restrictive or if my architecture doesn't fit, but I needed nested async/await calls.

  17. Post #537
    Click for bunny <3
    MattJeanes's Avatar
    September 2010
    1,478 Posts
    So I found out node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript node project transpiles async/await by itself and upgrading was not required.

    Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature is just restrictive or if my architecture doesn't fit, but I needed nested async/await calls.
    Async/await in JS is basically just a fancy wrapper around Promises: an async function returns a promise, and you can await other promises inside of it.

  18. Post #538
    Gold Member
    Tamschi's Avatar
    December 2009
    8,679 Posts
    So I found out node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript node project transpiles async/await by itself and upgrading was not required.

    Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature is just restrictive or if my architecture doesn't fit, but I needed nested async/await calls.
    You can probably tell the TypeScript compiler to target a newer ES version so it emits async/await directly instead of transpiling it.

  19. Post #539
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    You can implement circular buffers on textures though, can't you? A 512x512 texture can hold 512 circular buffers of ~512 elements each, for example (slightly fewer than 512 because you need some space for the circular buffer control variables).
    Sorry, I'm not quite sure what you mean? Most of the memory allocation and such calls have the VKAPI flag, so I'm pretty sure they get called by the GPU itself and such, so I'm not sure how control variables would work. The idea is to allocate a large chunk of space (based on the hardware I'm running on), and use that to stream textures. One of the problems I'm realizing I'm going to have is texture compression, because the raw output from my compute jobs isn't going to be compressed, and I don't know if I'm going to be able to compress it in time. Texture streaming is down the road though. Like, y'know, after I make a triangle appear.

    For now, it's going to be useful as a "FIFO" type queue to help stage resources. It's a common approach that I found several people using when I was browsing /r/vulkan, just that no one uses the term "circular buffer", even if they're effectively using one. It's not a very well-known structure, it would seem.
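    The core of the thing is tiny anyway; roughly this, minus all the fence tracking that decides when the tail is allowed to advance (a sketch with made-up names, not my actual code):

    Code:
    #include <cstdint>

    // Circular staging buffer: carve ranges out at 'head', reclaim them from 'tail'
    // once a fence says the GPU has finished copying out of that region.
    struct StagingRing
    {
        static const uint64_t kFull = UINT64_MAX;

        uint64_t capacity = 0;
        uint64_t head = 0;  // next byte to write
        uint64_t tail = 0;  // start of the oldest data the GPU may still be reading

        // Returns the byte offset to copy into, or kFull if the ring has no room.
        uint64_t allocate(uint64_t size, uint64_t alignment)
        {
            uint64_t offset = (head + alignment - 1) & ~(alignment - 1);
            if (head >= tail) {                              // free space: [head, capacity) and [0, tail)
                if (offset + size <= capacity) { head = offset + size; return offset; }
                if (size < tail)               { head = size; return 0; }  // wrap to the start
            } else if (offset + size < tail) {               // free space: [head, tail)
                head = offset + size;
                return offset;
            }
            return kFull;  // caller waits on a fence or flushes before retrying
        }
    };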

    Also, if anyone's interested in trying out Vulkan, I highly recommend vulkan.hpp as the main include file you draw from (it's packaged with the API now, alongside vulkan.h). It's something NVIDIA did a fairly decent job with, as it adds:

    - typesafe enums and flags
    - support for standard library containers
    - ability to use references instead of pointers in many locations
    - optional exception support
    - "CreateInfo" structs no longer have issues with uninitialized values

    Everything is also encapsulated in a "vk" namespace, the vk prefix is removed from everything, and the enum and struct names are seriously cleaned up. It's not a high-level wrapper by any means, but it does make it handle more like C++ and less like outdated C (which is odd, tbh, because Vulkan accommodates C++14 programming techniques just fine but is conceptually laid out like old C). The only problem with vulkan.hpp is that it's not documented or demoed like regular ol' vulkan.h is, so the best bet is to just read through the header and figure out how things translate yourself. Unfortunately, this also makes it easy to miss features, like the ability to turn VkResults into more descriptive strings, or the ability for vulkan.hpp to take care of grabbing debugging function pointers for you.
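    To give a taste of what it looks like, here's a trivial (untested) instance-creation sketch:

    Code:
    #include <vulkan/vulkan.hpp>

    int main()
    {
        // The structs default-initialize sType and zero everything else,
        // so there are no uninitialized-field surprises.
        vk::ApplicationInfo appInfo("demo", 1, "no-engine", 1, VK_API_VERSION_1_0);
        vk::InstanceCreateInfo createInfo({}, &appInfo);

        // Throws vk::SystemError on failure instead of handing back a VkResult to check.
        vk::Instance instance = vk::createInstance(createInfo);
        instance.destroy();
        return 0;
    }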

  20. Post #540
    cam64DD's Avatar
    November 2008
    949 Posts

    Added Percentage-Closer Soft Shadows to Bulletin.

    So now, shadows can be SUPER SOFT or SUPER HARD.

    And guess what, the implementation actually gave me an FPS boost compared to my old shitty custom shadow casting solution!

  21. Post #541
    Gold Member
    HiredK's Avatar
    November 2006
    433 Posts
    Some experiments I've been doing with planet rendering lately. I'm trying to simulate a gravitational shift so that objects can get caught in a planet orbit.


  22. Post #542
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    Some experiments I've been doing with planet rendering lately. I'm trying to simulate a gravitational shift so that objects can get caught in a planet orbit.

    rated winner because that's also a really good song.

    How do you render the grass currently? I want to explore this technique when the time comes around http://outerra.blogspot.com/2012/05/...rendering.html

  23. Post #543
    Gold Member
    HiredK's Avatar
    November 2006
    433 Posts
    rated winner because that's also a really good song.

    How do you render the grass currently? I want to explore this technique when the time comes around http://outerra.blogspot.com/2012/05/...rendering.html
    I actually tried to replicate what Brano is explaining in the comments of this article, the "use vertex id in shader to drive the generation process" comment. I'm currently using a virtual vbo (basically just a static index buffer) to send 160K (400 * 400) GL_POINTS to the GPU, then in a vertex shader I place them in a grid using gl_VertexID, like this:

    Code:
    float px = u_Offset.x + (gl_VertexID % 400) * u_Offset.z;
    float py = u_Offset.y + (gl_VertexID / 400) * u_Offset.z;
    
    float h = texTile(s_HeightNormal_Slot0, vec2(px, py), u_HeightNormal_TileCoords, u_HeightNormal_TileSize).x;
    gl_Position = vec4(px, h, py, 0);
    Then in a geometry shader, I add some random offsets to the grid and build 3 intersecting planes like this:

    Code:
    for(int i = 0; i < 3; i++)
    {
        vec3 vBaseDirRotated = (rotationMatrix(vec3(0, 1, 0), sin(u_Timer * 0.7f) * 0.1) * vec4(vBaseDir[i], 1.0)).xyz;
    
        ....
    
        // Grass patch top left vertex
        vec3 vTL = vGrassFieldPos - vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
        vTL.y += fGrassPatchHeight;
        vec3 vTL_deformed;
        gl_Position = getDeformedPos(vTL, vTL_deformed);
        fs_TexCoord = vec2(fTCStartX, 1.0);
        fs_Deformed = vTL_deformed;
        EmitVertex();
    		
        // Grass patch bottom left vertex
        vec3 vBL = vGrassFieldPos - vBaseDir[i] * fGrassPatchSize * 0.5f;
        vec3 vBL_deformed;
        gl_Position = getDeformedPos(vBL, vBL_deformed);
        fs_TexCoord = vec2(fTCStartX, 0.0);
        fs_Deformed = vBL_deformed;
        EmitVertex();
    		
        // Grass patch top right vertex
        vec3 vTR = vGrassFieldPos + vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
        vTR.y += fGrassPatchHeight;
        vec3 vTR_deformed;
        gl_Position = getDeformedPos(vTR, vTR_deformed);
        fs_TexCoord = vec2(fTCEndX, 1.0);
        fs_Deformed = vTR_deformed;
        EmitVertex();
    		
        // Grass patch bottom right vertex
        vec3 vBR = vGrassFieldPos + vBaseDir[i] * fGrassPatchSize * 0.5f;
        vec3 vBR_deformed;
        gl_Position = getDeformedPos(vBR, vBR_deformed);
        fs_TexCoord = vec2(fTCEndX, 0.0);
        fs_Deformed = vBR_deformed;
        EmitVertex();
    }
    And finally I use a threshold value to discard pixels using a grass atlas texture in the fragment shader. I'm pretty sure rendering the actual blade geometry like Outerra is doing is a better approach, but I haven't found a good way to do it yet. The performance is really good; I hardly notice a dip in the FPS when it's toggled on/off. I actually managed to get some shadows working in an older build, here's what it looks like:


  24. Post #544
    Gold Member
    paindoc's Avatar
    March 2009
    9,414 Posts
    I actually tried to replicate what Brano is explaining in the comments of this article, the "use vertex id in shader to drive the generation process" comment. I'm currently using a virtual vbo (basically just a static index buffer) to send 160K (400 * 400) GL_POINTS to the GPU, then in a vertex shader I place them in a grid using gl_VertexID, like this:

    Code:
    float px = u_Offset.x + (gl_VertexID % 400) * u_Offset.z;
    float py = u_Offset.y + (gl_VertexID / 400) * u_Offset.z;
    
    float h = texTile(s_HeightNormal_Slot0, vec2(px, py), u_HeightNormal_TileCoords, u_HeightNormal_TileSize).x;
    gl_Position = vec4(px, h, py, 0);
    Then in a geometry shader, I add some random offsets to the grid and build 3 intersecting planes like this:

    Code:
    for(int i = 0; i < 3; i++)
    {
        vec3 vBaseDirRotated = (rotationMatrix(vec3(0, 1, 0), sin(u_Timer * 0.7f) * 0.1) * vec4(vBaseDir[i], 1.0)).xyz;
    
        ....
    
        // Grass patch top left vertex
        vec3 vTL = vGrassFieldPos - vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
        vTL.y += fGrassPatchHeight;
        vec3 vTL_deformed;
        gl_Position = getDeformedPos(vTL, vTL_deformed);
        fs_TexCoord = vec2(fTCStartX, 1.0);
        fs_Deformed = vTL_deformed;
        EmitVertex();
    		
        // Grass patch bottom left vertex
        vec3 vBL = vGrassFieldPos - vBaseDir[i] * fGrassPatchSize * 0.5f;
        vec3 vBL_deformed;
        gl_Position = getDeformedPos(vBL, vBL_deformed);
        fs_TexCoord = vec2(fTCStartX, 0.0);
        fs_Deformed = vBL_deformed;
        EmitVertex();
    		
        // Grass patch top right vertex
        vec3 vTR = vGrassFieldPos + vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
        vTR.y += fGrassPatchHeight;
        vec3 vTR_deformed;
        gl_Position = getDeformedPos(vTR, vTR_deformed);
        fs_TexCoord = vec2(fTCEndX, 1.0);
        fs_Deformed = vTR_deformed;
        EmitVertex();
    		
        // Grass patch bottom right vertex
        vec3 vBR = vGrassFieldPos + vBaseDir[i] * fGrassPatchSize * 0.5f;
        vec3 vBR_deformed;
        gl_Position = getDeformedPos(vBR, vBR_deformed);
        fs_TexCoord = vec2(fTCEndX, 0.0);
        fs_Deformed = vBR_deformed;
        EmitVertex();
    }
    And finally I use a threshold value to discard pixels using a grass atlas texture in the fragment shader. I'm pretty sure rendering the actual blade geometry like Outerra is doing is a better approach, but I haven't found a good way to do it yet. The performance is really good; I hardly notice a dip in the FPS when it's toggled on/off. I actually managed to get some shadows working in an older build, here's what it looks like:

    I was wondering if you did something like that, because it looked quite nice and had an impressive amount of detail. I can't say that I've found a lot of resources on grass rendering and generation, so I'm going to have to add this post to my list of resources. I imagine I'll be trying a number of techniques too.

    After getting window creation working yesterday, I got logical device and command queue creation working today, and sent some basic transfer commands to/from the GPU. Next is buffer objects and basic shader functionality, at which point I should be able to get a couple of rendering tests working.

    Edited:

    Oh god, almost forgot that I still need to set up the swap chain sometime soon too, since double-buffered rendering is close to essential.

  25. Post #545
    Gold Member
    antiChrist's Avatar
    March 2011
    595 Posts
    been working on a rigid body skateboard test thingie

    didn't expect it to go rogue


  26. Post #546
    Meow :3
    Ac!dL3ak's Avatar
    July 2005
    6,189 Posts
    I uh

    I just got a really awesome job offer???

    I am super exite

  27. Post #547

    January 2012
    483 Posts
    I uh

    I just got a really awesome job offer???

    I am super exite
    Well don't leave us hangin, what do they do?

  28. Post #548
    Gold Member
    Berkin's Avatar
    October 2013
    1,845 Posts
    I wrote a prescription drug name generator last night because I was bored.

    Code:
    [rs:20;\n]
    {
        [case:first]
        {
            is{o|a}|
            l{e|i}{v|t|m}{i|o|a}|
            ox{y|i|o}|
            n{e|i}x|
            ins{u|o}|
            {z|s}{e|i}t{i|o|y|a}|
            ne{o|a}|
            ep{o|i|a}|
            {t|p}r{u|o|a}|
            di{o|a}|
            {v|f}{i|e}{a|o}|
            r{i|a}{t|v|d|n|m}|
            sy{n|m}|
            pr{e|i}|
            d{i|e}x|
            se{n|m{|i}}
            av{o|i|o{l|n|m|r}}
            b{e|a|i}n
        }
        {c|t|x|th|z}
        {
            or|ia|il|
            {r|l}
            {
                a|o|
                {o|i|a}{x|x{y|i|a}{l|m|lt}}
            }|
            i{u|a}m|tin|fi{b|m|l{|um}}
        }
        {|{fin|la|pra|min|ya|se|ide|gen|ci{a|um}}}
        [reg]
    }
    Output:
    Setocil®
    Isoztinpra®
    Isocra®
    Synthorse®
    Sitythlo®
    Limoxtin®
    Ramziaide®
    Epathiumcium®
    Litacfim®
    Dixciumfin®
    Neacil®
    Insuzia®
    Diotfilumya®
    Epotilya®
    Pritfilum®
    Dextfim®
    Epaclo®
    Oxyxlaxilt®
    Senavobinztinmin®
    Neaziamcium®

  29. Post #549
    Gold Member
    Dr Magnusson's Avatar
    July 2008
    2,848 Posts
    I came across the most mindfuck dense code I've ever seen at work the other day. Luckily it's not code that's run very often, but almost every single line is wrong or redundant.

    Some background: The code is meant to get a list of groups that exist via an API, then update its own database by removing the groups that no longer exist. The code is written from memory and some boilerplate is omitted for clarity.

    real_group_ids = API.GetAllGroups()
      .Select(group => group.Id);
    
    existing_groups = db.Groups
      .Select(group => group.id)
      .Where(id => real_group_ids.Contains(id));
    
    groups_to_delete = db.Groups
      .Where(group => !existing_groups.Contains(group.id));
    
    foreach(var group in groups_to_delete)
    {
      while(group.Devices.Count() > 0)
      {
        group.Devices.Remove(group.Devices.Front());
    
        while(group.Products.Count() > 0)
          group.Products.Remove(group.Products.Front());
    
        db.Groups.Remove(group);
      }
    }
    

    So a play-by-play here:

    1) It gets all the groups from the API, all good.

    2) Instead of doing a conditional select on the DB and straight up finding all the groups that aren't in the list the API provided, it gets a list of all group ids that SHOULD exist.

    3) Then, it uses that list of group ids that should exist, to get all the groups NOT in that list (bonus here for extracting the id in the first place and then going back to search by id to get the original group)

    4) Looping over all the groups, it:

    5) Not only tries to remove all the devices attached to the group one by one instead of removing them all at once, but

    6) Fails to close the while-loop, meaning that for every single device in the list it:

    7) Deletes the device

    8) Removes all products attached to the GROUP, again one by one instead of all at once, even though after the first device is removed, not only does the group no longer exist (see #9), but the products have already been removed ONE BY ONE in the previous iteration!

    9) It removes the whole group from the database every time it's removed ONE device, meaning that when it comes back around for removing the second device, the group is already dead

  30. Post #550
    Dennab
    October 2016
    1,897 Posts
    I've been teaching myself multi-threading and implementing asynchronous processing in my world generation program, and it's going mildly okay. I've managed to render my entire environment with only a few mild pixel errors on my resulting sphere map. I really need to de-OC my CPU to see if that's an issue with my latest OC changes or if it's my program, so I'll either have a massive headache or less of one here soon.



    note: real heightmaps are far larger than 1024x512, and doing them single-threaded is the single worst waste of anyone's time.

  31. Post #551
    Grammar Nazi General
    Adelle Zhu's Avatar
    April 2009
    2,088 Posts
    This thread makes me feel inadequate with my few small Flask sites.

  32. Post #552
    Dennab
    October 2016
    1,897 Posts
    Don't worry, it turns out the more threads I add to the process, the worse my artifacting gets, which means I've not really achieved much other than making a process worse.

    I think part of the problem is that I'm working with an outdated 32-bit library (last released in about 2011), and I have absolutely zero knowledge about writing libraries so I can't recompile it for x64, especially because that would involve rewriting an entire library, which I just don't know how to do and don't want to do.

    That, and my threads are outputting to the exact same heightmap class and I have absolutely zero mutex stuff going on right now, which is generally a problem. In fact, I think most of my problem may be the lack of a mutex.

    edit:

    After implementing some mutex stuff I can safely say that it hasn't been any more helpful in reducing excess noise on my output. So fuck me.
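    Next thing I'm going to try is skipping the locking entirely and giving each thread its own disjoint band of rows, roughly like this (a sketch; sample() stands in for my real generator):

    Code:
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Each worker writes only its own rows of the shared heightmap,
    // so no mutex is needed on the buffer itself.
    void generateHeightmap(std::vector<float>& heights, int width, int height,
                           float (*sample)(int x, int y), unsigned threadCount)
    {
        std::vector<std::thread> workers;
        int rowsPerThread = (height + int(threadCount) - 1) / int(threadCount);
        for (unsigned t = 0; t < threadCount; ++t) {
            int begin = int(t) * rowsPerThread;
            int end   = std::min(height, begin + rowsPerThread);
            workers.emplace_back([&, begin, end] {
                for (int y = begin; y < end; ++y)
                    for (int x = 0; x < width; ++x)
                        heights[std::size_t(y) * width + x] = sample(x, y);
            });
        }
        for (auto& w : workers) w.join();
    }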

  33. Post #553
    Gold Member
    Berkin's Avatar
    October 2013
    1,845 Posts
    Update: It has side-effects now.

    name:
    Zitizlopra®

    side-effects:
    nausea, blurry vision, hearing loss, tooth loss, projectile vomiting, ringing of the ears, and abnormal nail growth
    name:
    Truvaxide®

    side-effects:
    sensitivity to light and frequent urination
    name:
    Nexviala®

    side-effects:
    massive stools
    name:
    Isolammin®

    side-effects:
    fatigue, low blood pressure, muscle spasms, drooling, trouble breathing, acne, heartburn, abnormal hair growth, insomnia, uncontrollable laughing, headache, weight gain, trouble swallowing, uncontrollable gas, tooth loss, projectile vomiting, and itching

  34. Post #554
    tW4r's Avatar
    August 2013
    135 Posts
    Update: It has side-effects now.
    Reminds me of one of the responses of the priest unit in Warcraft III:
    Side effects may include: dry mouth, nausea, vomiting, water retention, painful rectal itch, hallucination, dementia, psychosis, coma, death, and halitosis. Magic is not for everyone. Consult your doctor before use.

  35. Post #555
    BIG TITLE
    Pat.Lithium's Avatar
    November 2009
    11,650 Posts
    I spent 4 hours last night trying to get this to work because the due date said Monday the 25th of April 2016.

    They updated the due date to the 1st of May. I was so fucking stressed out; I'm so tired and it's not even due for another 4 days. At least I have a proper understanding of some Swift shit that I was just using before.



    Edited:

    Also, I accidentally made the background black and I have no idea how to change it back to the default.

  36. Post #556
    tschumann's Avatar
    March 2009
    647 Posts
    I've been looking to add some basic previewing of Half-Life/Half-Life PS2/007 Nightfire (and later on Source engine) .bsp files to my multitool: http://www.teamsandpit.com/#gssmt
    I started playing around with OpenGL immediate mode (wrong thing, I know) today and realised I've probably got a lot of reading to do.

  37. Post #557

    May 2016
    187 Posts
    I've been looking to add some basic previewing of Half-Life/Half-Life PS2/007 Nightfire (and later on Source engine) .bsp files to my multitool: http://www.teamsandpit.com/#gssmt
    I started playing around with OpenGL immediate mode (wrong thing, I know) today and realised I've probably got a lot of reading to do.
    If you're going to do a lot of reading, don't waste it on immediate mode. It's not actually easier than proper OpenGL; it just seems that way at first. Really, take a look at learnopengl.com. It'll teach you everything you need to do nifty stuff in OpenGL using the core profile, and it's very accessible.
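    For a feel of the difference, the core-profile version of "get a triangle on screen" is basically this instead of glBegin/glVertex calls (a sketch; assumes a context, a loader like glad, and a bound shader program are already set up):

    Code:
    // Upload the vertex data once into a buffer object the driver owns.
    GLuint vao = 0, vbo = 0;
    const float verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    // Then each frame:
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);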

  38. Post #558
    tschumann's Avatar
    March 2009
    647 Posts
    If you're going to do a lot of reading, don't waste it on immediate mode. It's not actually easier than proper OpenGL; it just seems that way at first. Really, take a look at learnopengl.com. It'll teach you everything you need to do nifty stuff in OpenGL using the core profile, and it's very accessible.
    Thanks - yeah I only started with immediate mode because it's the only thing I knew anything about.

  39. Post #559
    Gold Member
    fewes's Avatar
    December 2006
    1,734 Posts
    I wrote a basic little distortion/refraction shader for Unity. Really it's just a fresh rewrite (with some improvements) of an old messy shader I found on the internet, something I had been meaning to do for a while. Unfortunately it does not work fully in VR, but I'm looking into that right now.



    Code available here.

  40. Post #560
    moonmoonmoon's Avatar
    April 2017
    10 Posts
    I'm working on lamps and dynamic lights and overall coziness
