1. Post #1
    Certified Catgirl Maid
    slayer20's Avatar
    January 2006
    9,357 Posts
    So I was watching a couple tutorial videos for C++ programming and the guy was going over signed, unsigned, short, and long INT types.

    He mentioned to go ahead and always use long, and not short, but to keep in mind that long has a larger file size than short.

    My question is this though, in a game design sense, when might it be better to use short compared to long?

    A few examples would be nice.

  2. Post #2
    dajoh's Avatar
    March 2011
    625 Posts
    On most platforms a char is 1 byte, short is 2 bytes, int is 4, and the size of long depends on the OS.
Whatever tutorial you're using, it's ass; longs are pretty much never (explicitly) used over ints.

    In game design you don't really need to care much about the capacity of your integers, unless you're storing a very large number or trying to make your networking protocol as efficient as possible.

  3. Post #3
    sim642's Avatar
    July 2010
    1,039 Posts
    int is the most general one that is used for all integer needs. short and long should be used only when there are special requirements for the size.

  4. Post #4
    Please waste more of your money changing this title again.
    Gmod4ever's Avatar
    August 2005
    6,715 Posts
    As Dajoh said, the person who made your tutorial doesn't seem to really know what he's doing.

    Longs will almost never be used. There are only very extreme scenarios where you would ever have to actually use a long, due to the very nature of such extreme numbers.

    As Dajoh (again) said, a short is 2 bytes, or 16 bits. This means the maximum value of an unsigned short is 2^16 − 1, or 65,535. This is fairly small, so you won't see shorts used terribly often. In fact, one could almost argue you'd see chars, at 1 byte, or 8 bits, with a maximum unsigned value of 255, more often than shorts, because if you're dealing with numbers that small, your values probably don't exceed 255 anyway, so chars work just fine.

    Longs vary in size; for 32-bit Windows, C++'s longs are 4 bytes, or 32 bits long, which means they have an unsigned capacity of 2^32, or a good bit over 4 billion.

    In 32-bit Windows, Java's longs are a massive 8 bytes, or 64 bits, though since Java requires its longs to be signed, you have a range of −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.

    So you should almost never have to use longs over shorts.

    And then, as mentioned previously, simple integers are the most common form, and they should do you fine. Only use chars if you know, for certain, your numbers won't be going above 255, and I honestly can't think of an instance where you'd really use a short.

    Just stick with integers, unless those extraordinary circumstances for longs call upon you.

  5. Post #5
    Gold Member
    Jookia's Avatar
    July 2007
    6,768 Posts
    Use int and char, and then use stdint.h for size specific stuff.

  6. Post #6
    dajoh's Avatar
    March 2011
    625 Posts
    It's worth mentioning that longs are always 32 bits on Windows (because of backward compatibility), but on Linux they're 32 or 64 bits depending on whether you have a 32- or 64-bit OS.

    And as Jookia said, you should use cstdint for when you absolutely have to have the right amount of bits.

  7. Post #7
    Person
    geel9's Avatar
    June 2008
    5,578 Posts
    Also, longs and ints don't really impact "file size" of a program.

  8. Post #8
    Gold Member
    MegaJohnny's Avatar
    April 2006
    5,185 Posts
    I lost a mark on a lab exercise because some statistics data I collected overflowed the regular ints I was using.

    Int should be fine for pretty much anything, just consider the limits of them if you start getting strange numerical errors.

    I agreed to help with a game some uni friends were making a while ago, and one of the programmers kept using UInt8 or something (in C++) if he knew the value wouldn't exceed 255. That really peeved me.

  9. Post #9
    Gold Member
    Trumple's Avatar
    September 2009
    6,197 Posts
    Also, longs and ints don't really impact "file size" of a program.
    -snip- apparently it does

    OP the tutorial you saw sounds rather misinformed, check out this table: http://www.cplusplus.com/doc/tutorial/variables/
    Only use larger data types if you actually need them, though int is commonly used even for things that would fit in 1 byte
    Usually not too much of a problem on computer platforms, but you'd need to be careful on an embedded system

  10. Post #10
    Gold Member
    gparent's Avatar
    January 2005
    3,949 Posts
    Almost everyone in this thread either has no idea what they are talking about, or is purposely hiding details and simplifying things for you. Since I like doing neither of those things, I'll follow my disclaimer with a few quotations and why they are wrong.

    Disclaimer:

    Please do not take it personally if I quoted you; it just means that either what you wrote isn't factually correct, that you made a simple mistake, or that I'm being overly fucking pedantic about it. No need to quote me and say "Oh I knew but <X> !!!". I'm human and I'll probably fuck up in this post too; feel free to point it out if I do. Integral types' sizes in C++ are a source of much confusion, so I tend to give a lot more detail than necessary to people who take the time to ask for clarification about the subject. So here we go.

    Myth number one:
    On most platforms a char is 1 byte
    On all platforms, the size of char is defined to be large enough to hold any character of the implementation's basic character set. The signedness of a char is implementation defined, which means it could be either unsigned or signed. The statement is also wrong if you use byte as a generic term for 8 bits because in C++, a byte contains CHAR_BIT bits, not necessarily 8 (CHAR_BIT is in <climits>). The standard says that sizeof(char) must equal 1, it is not optional.

    Myth number two:
    short and long should be used only when there are special requirements for the size.
    As others have mentioned here, types from the <cstdint> header should be used when you need specific bit sizes. Things like std::uint32_t are the way to go if you want to be sure to have 32-bits. Personally, I recommend using boost's cstdint header rather than your compiler's. It will reuse the latter's header if it makes sense to do so, so you gain a little bit of portability.

    Myth number three:
    Longs will almost never be used. There are only very extreme scenarios where you would ever have to actually use a long, due to the very nature of such extreme numbers.
    Okay, just to be clear, on most platforms long is the same size as int (it's a distinct type, not a typedef, but the two usually share a representation). In the C++ standard, here are a few key sentences:

    "There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list." - C++ standard

    So while the size of an integer type isn't really explicitly defined, you can be assured that a short is at least as big as a char, an int is at least as big as a short, and so on. Long long int is from C++11, which is the standard I'm quoting at the moment, but you'll find that C++03 reads the same way.

    "Plain ints have the natural size suggested by the architecture of the execution environment[44]; the other signed integer types are provided to meet special needs." - C++ standard
    "44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>" - C++ standard

    "Maximum value for an object of type int INT_MAX +32767" - C standard
    "Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign."
    The last sentence allows larger integers than 16-bit ones.

    Only use chars if you know, for certain, your numbers won't be going above 255,
    No, don't. Use std::uint8_t. char's signedness means that on certain compilers, you will overflow if you assume the above.

    Myth number four:
    Use int and char, and then use stdint.h for size specific stuff.
    This should read "Jookia is a fucking badass" because it's the simplest and most relevant piece of advice in this thread. However it's half a myth because you should use <cstdint> instead of <stdint.h> because the latter is deprecated and should not be used.

    Myth number five:
    Also, longs and ints don't really impact "file size" of a program.
    This is wrong because of const values, which may be put in a read-only section of your executable, increasing its size.

    Myth number six:
    Int should be fine for pretty much anything, just consider the limits of them if you start getting strange numerical errors.
    Consider the limits ints have every single time you think they could be a problem. Don't wait for your program to throw errors; think about what the variable could hold and plan accordingly. He's somewhat right in the sense that you shouldn't stress yourself over it either; int will do fine for 99% of purposes. But be careful.

  11. Post #11
    blankthemuffin's Avatar
    July 2009
    1,265 Posts
    In 32-bit Windows, Java's longs are a massive 8 bytes, or 64 bits, though since Java requires its longs to be signed, you have a range of −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
    On top of what gparent said, I'd like to note that Java's longs (and those of many other languages) are the same size regardless of platform. Unlike C and C++, the language specifies the exact size and representation of its primitive types.

    http://docs.oracle.com/javase/tutori...datatypes.html

    I am also compelled to note that in some cases the size chosen can be very important for memory usage and performance as well as the obvious bit where you can end up with hard to debug problems. So it's something you actually want to pay attention to.

    Also noteworthy is that signed integer overflow is undefined behaviour.

  12. Post #12
    dajoh's Avatar
    March 2011
    625 Posts
    On all platforms, the size of char is defined to be large enough to hold any character of the implementation's basic character set. The signedness of a char is implementation defined, which means it could be either unsigned or signed. The statement is also wrong if you use byte as a generic term for 8 bits because in C++, a byte contains CHAR_BIT bits, not necessarily 8 (CHAR_BIT is in <climits>). The standard says that sizeof(char) must equal 1, it is not optional.
    How is it a myth? On most platforms (x86, x86-64, ARM, etc.) a char is 8 bits.
    Are you saying this isn't correct?

    This is wrong because of const values, which may be put in a read-only section of your executable, increasing its size.
    Not only constant values, and not only in read-only sections. Most globals that aren't constant integers are going to be placed in a physical section of the executable, while globals without initializers aren't going to increase the executable size, since no space is allocated for them until the executable has been loaded.

  13. Post #13
    Gold Member
    gparent's Avatar
    January 2005
    3,949 Posts
    How is it a myth? On most platforms (x86, x86-64, ARM, etc.) a char is 8 bits.
    Are you saying this isn't correct?
    It's not that it's incorrect, it's the way you said it. A byte in C++ can have more than 8 bits, so if you use the "byte == 8 bits" definition, you're wrong because of that, and if you use the "sizeof(char) == 1" definition of byte, your post is misleading because it isn't on MOST platforms but on ALL platforms that this must be true.
    Not only ..., and not only .....
    Right, I only gave information that was explicitly relevant to the quote I was disproving. Some strings can also end up in the executable, etc.

  14. Post #14
    dajoh's Avatar
    March 2011
    625 Posts
    It's not that it's incorrect, it's the way you said it. A byte in C++ can have more than 8 bits, so if you use the "byte == 8 bits" definition, you're wrong because of that, and if you use the "sizeof(char) == 1" definition of byte, your post is misleading because it isn't on MOST platforms but on ALL platforms that this must be true.
    When I said byte I didn't mean a C++ byte, I meant your common 8 bit byte.
    I see your point now, I should have said octet in my original post.

  15. Post #15
    Gold Member
    gparent's Avatar
    January 2005
    3,949 Posts
    When I said byte I didn't mean a C++ byte, I meant your common 8 bit byte.
    I see your point now, I should have said octet in my original post.
    Standard terms are fun like that.