I don't understand how people can say the effects look like shit, then bitch that they don't get them...
So, because I have a top of the line ATi card, I'm not allowed to enjoy a game?
That's cool, I just won't buy it. More money for me.
I really like PhysX, which is why I refuse to buy nvidia cards. This kind of feature should not be proprietary.
However, it does make Borderlands look awesome.
You guys realize that PhysX is not just for cloth and fluid right? It's literally the entire physics library from character controllers to simulating every physical object. It'd be like saying Havok was a gimmick because it had optional libraries for hair physics.
As someone who's actually used PhysX from a programming perspective, I fucking loved it; it has a lot of really awesome features and it's pretty easy to use. It's really well documented, has a lot of sample apps, and it ran way better than any of the other physics libraries I tried (and I have an ATI card in the laptop I'm currently using).
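For anyone curious what that looks like in practice, here's roughly the "hello world" of the 3.x SDK. This is from memory, so treat it as a sketch rather than copy-paste code; exact names and signatures may be slightly off:

    #include <PxPhysicsAPI.h>

    using namespace physx;

    static PxDefaultAllocator gAllocator;
    static PxDefaultErrorCallback gErrorCallback;

    int main()
    {
        // Boilerplate: foundation + top-level physics object
        PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
        PxPhysics* physics = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

        // A scene with gravity and a plain CPU dispatcher (no GPU required)
        PxSceneDesc sceneDesc(physics->getTolerancesScale());
        sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
        sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
        sceneDesc.filterShader = PxDefaultSimulationFilterShader;
        PxScene* scene = physics->createScene(sceneDesc);

        // One dynamic box dropped from 10 units up
        PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.1f);
        PxRigidDynamic* box = PxCreateDynamic(*physics, PxTransform(PxVec3(0.0f, 10.0f, 0.0f)),
                                              PxBoxGeometry(1.0f, 1.0f, 1.0f), *material, 1.0f);
        scene->addActor(*box);

        // Step the simulation for one second at 60 Hz
        for (int i = 0; i < 60; ++i)
        {
            scene->simulate(1.0f / 60.0f);
            scene->fetchResults(true);
        }

        physics->release();
        foundation->release();
        return 0;
    }

Nothing in that snippet needs an NVIDIA card; it runs on the CPU dispatcher, which is how most games use PhysX anyway. The GPU-accelerated effects everyone argues about are an optional layer on top of the same scene.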
haha sweet this really rules!! thanks gearbox i'm sure everyone with ati cards will really appreciate you pushing an unnecessarily proprietary technology that holds back half of the GPU market! rock and roll nvidia for life
It's like watching a TF2 Fragvid in real time..
So why can't AMD just make their own PhysX and get gaming companies to use it?
The whole point is that you should be giving the exact same experience to all of your customers regardless of what hardware they have. If my AMD card could more than handle these effects had they not been done in PhysX, then why were they done in PhysX? This doesn't make PhysX or Nvidia better, it gives them an artificial advantage by making the game more detailed in some way.
I don't really like most of the PhysX stuff because it's like "Oo0o0o0 Physics and cloth", but the cloth especially doesn't fit AT ALL with what the game looks like. It's so smooth and realistic that it feels weird.
It's business, if you can give yourself an advantage, you do it. Dunno about you guys but I hardly think "Oh gee I better not get this card because it promoted proprietary software/functions".
This reminds me of the bullshit Crytek pulled with that Crysis comparison.
I'm not complaining about Nvidia here, they'll do what they do, I'm complaining about these game devs who had a choice to give all their customers the same experience but decided not to.
Fragmentation is the root of all evil.
I had the choice between a 560 and a 6950 and I chose the 6950. PhysX just doesn't appeal to me. It's cool and all, but it's entirely a gimmick. I'd prefer to support AMD, they're not as dirty as NVIDIA are.
I wish AMD hadn't killed off the ATI brand but fuck 'em, it's still ATI to me.
I'd like to add to this that CUDA's been used to create a sort of home supercomputer called a Tesla box, which you can build by chaining together a bunch of really powerful graphics cards. It's been really useful for engineers and scientists because it's within a reasonable price range and very powerful. A ghetto edition of this was done with PS3s before Sony went berserk on everybody.
I love the whole concept of PhysX (Fucking love HW acceleration), but Nvidia decided to make CUDA closed to other HW manufacturers, so PhysX was bought and built on top of CUDA, not to mention the amount of money they've poured into companies to get them using CUDA for stuff. I really wish OpenCL and the DX11 DirectCompute shaders were used instead; they're open standards and a lot more hardware agnostic.
The only reason CUDA has good documentation is because they have PAID a lot of good developers, and they've been forced to release the documentation for it.
Oh also: Havok is developing for OpenCL now, for example their cloth physics is OpenCL powered, I'd love to see the rigid body sim also get the HW treatment.
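To make the hardware-agnostic point concrete, here's roughly what it looks like to enumerate compute devices through OpenCL; nothing in it cares whether the driver underneath is NVIDIA, AMD, or Intel. Sketch from memory, error checking stripped:

    // List every OpenCL platform and device on the machine.
    // The same code path runs against NVIDIA, AMD or Intel drivers;
    // nothing here is vendor specific.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main()
    {
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, NULL, &numPlatforms);

        std::vector<cl_platform_id> platforms(numPlatforms);
        clGetPlatformIDs(numPlatforms, platforms.data(), NULL);

        for (cl_uint p = 0; p < numPlatforms; ++p)
        {
            char platformName[256] = {0};
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(platformName), platformName, NULL);

            cl_uint numDevices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, NULL, &numDevices);

            std::vector<cl_device_id> devices(numDevices);
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           numDevices, devices.data(), NULL);

            for (cl_uint d = 0; d < numDevices; ++d)
            {
                char deviceName[256] = {0};
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(deviceName), deviceName, NULL);
                printf("%s: %s\n", platformName, deviceName);
            }
        }
        return 0;
    }

That's the whole appeal: one code path, any vendor's hardware, which is exactly what CUDA can't give you.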
CUDA may be "closed", but it's only "closed" in the sense that the source code isn't available - NVIDIA aren't the only ones who use it. A lot of really cool stuff has been made with CUDA... incoming video spam.
I like options in the market, but fragmentation is silly. Choices between graphics cards need to be like the Android market, where you pay for something that does the same thing at a cheaper price, not the iOS vs Android market, where you're forced to choose between two different things.
What I'm saying is: Nvidia have either paid, or heavily influenced, developers to use CUDA; since most big developers document their stuff fairly well, CUDA has been WELL documented, while OpenCL lacks that.
A good example of developers who would like to stay agnostic on what APIs they use: DICE. They don't touch much stuff unless it's available on almost all hardware.
Also: I'm all about HW acceleration, I just think CUDA is going about it wrong, whilst OpenCL is doing the right thing by being compatible with as much hardware as possible.
I still think it's a silly gimmick(for now).
*your definition of "benefits" may vary
This is why PhysX itself is not that great, regardless of how great it is to work with. Most developers don't want to throw half their player base out the window for effects that could just as easily be done with any other physics library.