Capitalist Innovation. Just more enshittification. Nothing but the lowest common denominator.
I highly doubt it was the code. They probably reduced texture and background fidelity and compressed the cutscenes heavily. It’s simply impossible to reduce a codebase by 91% and have it still be recognizable. That said, the N64 had better hardware, and it’s possible some code could be eliminated because the chips had features that made that code unnecessary. But not nearly 686 MB worth.
So part of it is that it was only 2 discs partially for marketing reasons, partially because of limitations of how discs work (ironically enough, other than storage size, carts are orders of magnitude better for gaming compared to discs). Discs often have to carry multiple copies of the exact same content in multiple places because of physical/spatial concerns on the disc regarding load times.
Part of it is, yes, slightly reduced texture fidelity and sound quality, but you’re sleeping on the compression: they invented an entirely new compression scheme for the FMVs that was effectively magic. Add to that a bunch of tricks that took advantage of the N64 being vastly more powerful than the PS1, plus the near-instant load times of cartridges, and they were able to release a version of RE2 that had better controls, slightly less graphical fidelity, less polygon jitter, better animations, surround sound, more lore, VASTLY better load times, and did I mention better controls, because god damn does that need to be mentioned lol… in 9% of the space.
Mostly the difference between PlayStation games and N64 games was whether they came with a bunch of CD-quality audio tracks.
Once GPU hardware becomes good enough that even low-end computers can support real-time ray tracing at usable speeds, game developers will be able to remove the lightmaps, AO maps, etc. that usually comprise a very significant fraction of a game’s total file size. The problem with lightmaps is that even re-used textures still need different lightmaps, and you also need an additional 3D grid of baked light probes to light dynamic objects in the scene.
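To put very rough numbers on that (all the atlas counts and probe grid sizes below are my own assumptions, not figures from any particular game), here’s a back-of-envelope sketch of how quickly baked lighting data adds up:

```python
# Back-of-envelope estimate of baked lighting storage.
# Every count here is an assumption for illustration, not data from a real game.

def lightmap_bytes(resolution, channels=3, bytes_per_channel=1):
    """Uncompressed storage for one square lightmap atlas."""
    return resolution * resolution * channels * bytes_per_channel

# Assume 200 unique 1024x1024 RGB lightmap atlases across a level set.
lightmap_total = 200 * lightmap_bytes(1024)

# Assume a 64x64x16 grid of baked light probes for dynamic objects, each
# storing RGB x 9 spherical-harmonic coefficients at 2 bytes apiece.
probe_total = (64 * 64 * 16) * 3 * 9 * 2

print(f"lightmap atlases: {lightmap_total / 2**20:.0f} MiB uncompressed")
print(f"light-probe grid: {probe_total / 2**20:.1f} MiB uncompressed")
```

Even in this toy estimate the lightmaps dominate, and unlike albedo textures they can’t be shared between instances of the same mesh.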
Activision has a very interesting lighting technique that allows some fairly good fidelity from slightly lower resolution lightmaps (allowing normal maps and some degree of specular to work over a single lightmap texel) in combination with surface probes and volume probes, but it’s still a fairly significant amount of space. It also requires nine different channels afaik instead of the three that a normal lightmap would have. (https://advances.realtimerendering.com/s2024/content/Roughton/SIGGRAPH Advances 2024 - Hemispheres Presentation Notes.pdf)
Don’t blame the devs, blame the timelines and the pressures of implementing features.
Blaming the Devs in the era of fly-by-night employment, heavy sub-contracting, and AI integration into the dev cycle seems so painfully misplaced.
COD is pretty much just a huge confusing user interface with a bit of a game attached to it, released yearly. They for some reason force all assets for their online game to be installed if you install any cod game. It’s ridiculous.
I already know that people are going to excuse this practice or say it’s progress, but it’s not excusable; space wasting is a big problem in modern game development. Especially since modern games don’t use the same optimizations older ones did, such as not storing duplicate rotated or mirrored versions of textures. One idiot I’ve met on Lemmy didn’t understand what that means and thought I was talking about actual mirrors, so here’s a short demonstration.
Here is an example of a texture tile from an RPG Maker game. It’s lower quality, but this concept scales up and really applies to any game where textures are stored as images rather than solid colors or AI-generated on the fly (basically the vast majority of games out there).
This is an example of mirroring, or reflection. Yeah, that’s right, the word mirror can refer to a transformation, I know, wild, but people who are actual game devs should know this already. Even though this texture is small, if you have a lot of tiles like this that could easily be mirrored, it adds up fast, especially with larger textures.
This last one is called rotating. It’s not always ideal, since some textures are orientation-sensitive and can handle being mirrored but get messed up in tiling if they get rotated. So it can’t always be used, but it should be used where it can. Both of those are very computationally cheap and simple ways to save space on textures, by only storing as many as you need to paint the scene.
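If you want to see the idea concretely, here’s a minimal sketch with Pillow; the filename is made up, and a real engine would do this in its texture loader or on the GPU rather than in Python:

```python
# Ship one tile on disk, derive its mirrored and rotated variants at load time.
# "cliff_tile.png" is a placeholder filename for this example.
from PIL import Image

base = Image.open("cliff_tile.png")

mirrored = base.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # mirroring / reflection
rotated = base.transpose(Image.Transpose.ROTATE_90)         # 90-degree rotation

# The derived tiles only ever exist in memory; no extra bytes on disk.
tiles = {"base": base, "mirrored": mirrored, "rotated": rotated}
```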
Another way to optimize is to simply use lossless compression schemes, which these images already do since they’re .png files. This might seem like a no-brainer, but I’ve seen many modern games that store textures completely uncompressed and waste a lot of space, especially for bigger textures. It also applies to FMVs and animated textures. Use lossless compression standards for your assets; I really shouldn’t have to say that.
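As a rough illustration of what “completely uncompressed” costs, here’s a stdlib-plus-Pillow sketch; the synthetic gradient is a best case, so real textures won’t compress this well, but the gap stays large:

```python
# Compare the same RGBA image stored raw vs. losslessly compressed.
# The gradient image is synthetic, so the ratios here are optimistic.
import io
import zlib
from PIL import Image

img = Image.new("RGBA", (1024, 1024))
img.putdata([(x % 256, y % 256, 128, 255) for y in range(1024) for x in range(1024)])

raw = img.tobytes()                      # what an uncompressed texture costs
png_buf = io.BytesIO()
img.save(png_buf, format="PNG")          # PNG = lossless DEFLATE

print(f"raw RGBA: {len(raw) / 2**20:.1f} MiB")
print(f"PNG:      {len(png_buf.getvalue()) / 2**10:.0f} KiB")
print(f"zlib:     {len(zlib.compress(raw)) / 2**10:.0f} KiB")
```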
Finally, one way to reduce size dramatically is to just omit assets that aren’t needed. If your machine isn’t 4K-capable or doesn’t have a 4K display, then 4K or higher graphics aren’t going to do you any good and are just a waste of space. Most games don’t let you omit them during the download process, but worse, some games complain or redownload them if you delete them, despite them not being used at all. Basically these games could fit in a smaller size, but they have duplicate unused assets that could be removed, and they either make that difficult or don’t let you do it at all.
I want to point out that “duplicate textures” aren’t typically what wastes a lot of space. It’s one thing, but not typically the biggest thing.
The biggest thing is really unnecessarily high-definition textures, as you already said. If you have 4K textures for everything, that means roughly 5 MB per asset, even with compression. If you have 2000 of these assets in your game, you immediately need 10 GB of storage space, and of RAM when they are loaded into memory, and that’s what makes the game so heavy.
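The exact per-asset figure depends on which block-compression format you assume and how many maps (albedo, normal, roughness, and so on) actually ship at 4K, but the math behind it looks roughly like this:

```python
# Rough storage math for a single 4096x4096 texture under common GPU
# block-compression rates. The formats and mip overhead are illustrative
# assumptions, not numbers taken from any specific game.
side = 4096
texels = side * side

bytes_per_texel = {
    "RGBA8 (uncompressed)": 4.0,
    "BC7 (8 bits/texel)":   1.0,
    "BC1 (4 bits/texel)":   0.5,
}
mip_overhead = 4 / 3   # a full mip chain adds roughly one third

for name, bpt in bytes_per_texel.items():
    size = texels * bpt * mip_overhead
    print(f"{name}: {size / 2**20:.0f} MiB with mips")
```

Multiply whichever of those numbers fits your pipeline by a couple of thousand assets and you land in the tens of gigabytes very quickly.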
Just to make a counter-example, Luanti, which is like open-source Minecraft, uses 64x64 pixel assets at the most, where each asset consumes less than 1 KB of storage space if compressed, and that’s why even using a thousand assets in the game or more, the total game size is less than 30 MB. And that literally contains the whole game, including all assets and logic, which is just as complex as proprietary Minecraft.
It isn’t the biggest thing, but it’s one example of space saving that just isn’t done, out of laziness. Obviously it’s easy to skip it in the drafting and design phase, but it really should be optimized after the fact. No, the big space wasters are the duplicate video scenes for higher resolutions that may or may not ever be used. The 2K and 4K videos waste a lot of space, and that’s a big drawback if they’re never going to be used because, say, the person doesn’t have a monitor that goes that high.
The problem is, if you used normal compression formats, you would have to decompress them and then recompress them with the GPU supported formats every time you wanted to load an asset. That would either increase load times by a lot, or make streaming in new assets in real time much harder.
There are still other compression schemes that can be used to save space, and not compressing anything is a bad idea; it may not be the biggest waste of space, but it is a waste.
Is there any way an additional decompression step can be done without increasing load times and latency?
There are a number of compression algorithms that prioritize decompression speed, usually at the expense of higher compression times.
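A quick stdlib-only illustration of that tradeoff; real engines tend to use formats like LZ4 or Oodle, which push decompression speed much further than anything in the Python standard library:

```python
# Compare compression ratio vs. decompression time for two stdlib codecs.
# The payload is synthetic, so the absolute numbers are only illustrative.
import lzma
import time
import zlib

data = b"grass_tile albedo normal roughness " * 200_000   # ~7 MB of redundant bytes

for name, codec in (("zlib", zlib), ("lzma", lzma)):
    packed = codec.compress(data)
    start = time.perf_counter()
    codec.decompress(packed)
    ms = (time.perf_counter() - start) * 1000
    print(f"{name}: {len(packed):>8} bytes packed, decompressed in {ms:.1f} ms")
```

lzma squeezes the data harder but takes noticeably longer to unpack; game-oriented codecs sit even further toward the “fast to decompress” end.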
It can actually be quicker to store them compressed, because memory and bus bandwidth are often the bottleneck. So instead of the CPU or GPU wasting cycles waiting for data to be moved, some of that movement time is shifted onto the processors by using compression. Especially if there are idle cores that could be put on that task.
As for going from one compression format to another, you could store them in the final format (and convert on install if it differs between hardware setups, repeating if another hardware setup is detected).
Though if there’s any processing done on the uncompressed data (like generating mipmaps or something), that conversion might not even cost extra because it needs to be decompressed and the new data compressed again anyways.
Though on that note, you’d get faster load times by just storing all of that preprocessed data, and faster install times by just sticking it all in the install download, so there is still a conflict between optimal load speeds and minimal storage space.
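For reference, the mipmap generation mentioned above is just repeated downscaling; here’s a minimal Pillow sketch of it (the filename is made up, and a shipping engine would typically do this at bake time or on the GPU):

```python
# Build a mip chain by halving the image until it reaches 1x1.
# "albedo.png" is a placeholder filename for this example.
from PIL import Image

img = Image.open("albedo.png")
mips = [img]
while max(mips[-1].size) > 1:
    w, h = mips[-1].size
    mips.append(mips[-1].resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS))

print(f"{len(mips)} mip levels from a base size of {img.size}")
```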
Choosing which resolution you install sounds like a great idea. How much would you estimate it would reduce the aforementioned 300GB game?
Probably a significant amount, by far the heaviest storage usage in any game is the duplicate videos at different resolutions.
I get a paragraph for anything politically complex. At best. Why are the replies here 2000 word essays?
It’s funny because you don’t even have to go that far to find examples of really poor space usage.
Final Fantasy VII has the entire game on each disc; only the cutscenes differ between discs. That’s why the natural breakpoint for the game, after the party splits up, was shifted: the ending video was too big and required a disc by itself.
The second a developer doesn’t have to worry about something, they don’t. Give them 2TB NVMe, 5090, i9-14900k and 32GB of RAM, and suddenly that will all be at max utilization. But this isn’t a modern thing, it’s just one of many “necessity is the mother of invention” examples.
Another great example: Every modern desktop app and most mobile apps that just package & run an entire web browser for every single app. There is zero benefit to the user experience or resource utilization to use these sorts of tools, the only reason to do so is to allow code reuse & simplify development.
It especially infuriates me that every desktop app re-packages a whole web browser, as it could easily be avoided. The operating system should just provide a web browser library that can be dynamically linked by each application. It’s such an easy solution, but it isn’t used :(
Don’t forget the next step in the evolution: each web browser ships inside its own entire operating system container
There is a project called Tauri that uses the web browser libraries provided by operating systems. The problem is that each operating system has a different browser library, so features may not be supported or can work differently. Browser vendors seem allergic to actually following the standards for some reason.
Also by making the videos blurry https://m.youtube.com/watch?v=BaX5YUZ5FLk
Videos and textures are usually the biggest part of games, closely followed by audio
(☝︎ ՞ਊ ՞)☝︎
Nice video if you’re into these things. Learnt a lot
Game Dev here. I WISH we could still ship with N64-quality textures and audio. We’d use so much less disc space and probably finish sooner and cleaner.
FTL, Valheim, Muck, Brawlhalla, Among Us, Lethal Company, Loop Hero, Papers Please, Balatro, Slay the Spire, Undertale, Stardew Valley, Dead Cells, Ion Fury … are all under 1 gig.
Selaco, Prodeus, Ultrakill, Project Warlock, Cultic, DUSK… all between about 2 and 5 gigs.
This is far from an exhaustive list.
You can ship games like these.
People have done it, and made a good chunk of change, with dev teams of between … what, a single person to a max of maybe 10? Less?
You need to wish to work at a different studio, with different management, maybe a different engine, not wish it were possible to make a successful game without stupendously huge asset libraries.
Hell, even Alien Isolation, SOMA and No Man’s Sky are just above 20 gigs, and MGS V is just under 30 gigs of on-disk size.
It is totally possible to do pretty darn good graphics without breaking over 100 gigs of disk space.
None of these have anything even remotely close to 4k textures. We can argue all day about whether or not those are required for “good graphics” (I don’t think so either). But there’s no amount of optimization that compresses those textures without losing the fidelity you’re using them for.
It’s got absolutely nothing to do with the engine or optimization.
Uh, for the larger lists at the top of my post:
Yes, that is the point.
A game does not need to have 4K textures, does not need to have super high fidelity, super realistic graphics, to be successful.
…that is the point.
There is absolutely no unbreakable law of gaming that says a game’s success is directly proportional to or reliant on stupendously high res, high fidelity graphics.
Fortnite. Roblox. Minecraft.
Every goddamned Anime Waifu gacha game.
Stupendously successful and popular games.
Cartoony or low fidelity graphics.
…
For MGS V, Alien Isolation, No Man’s Sky, SOMA… those are games that have pretty darn high-fidelity graphics (No Man’s Sky somewhat recently got a major graphical overhaul update that includes 4K textures)… not quite as high fidelity as more recent ‘cutting edge realism’ graphics… but their on-disk file sizes are in the ballpark of an order of magnitude less.
So uh… that would lend credence to the idea that yes, actually, there are a great number of optimizations and design paradigms that can be and have been employed in the past to keep the overall disk size of a game down… and those concepts are no longer being utilized by many big-name game dev studios.
The majority of disc space on a game like CoD is textures, audio and FMV. There’s no compressing 4k textures to get them to a reasonable footprint without losing quality. Same for 4k FMV. It’s not management that drives the desire for high-res textures and diverse asset libraries, it’s generally the art team. Once they’re allowed to care about what kinds of shrubbery exist in Borneo and which exist in Minneapolis, you end up with 30 kinds of plants. Multiply that out for rocks, cars, rugs, etc and add in the expectation for 4k or 8k screens and individual assets get huge and the library gets huge.
You’re right that it’s possible to do “pretty good” graphics for less, but it’s telling that your examples are from a decade ago and/or heavily stylized.
From a decade ago and just as good as modern graphics. We stopped seeing actual return on graphical fidelity about a decade ago.
Yep, 4k textures, very high quality audio files and FMVs are very big and essentially impossible to meaningfully compress.
If you are saying it’s the art teams that are to blame… uh, they get their budget, headcount, and marching orders… from management, their team leads… right?
You could always have management hire other artists with different skillsets… make different decisions about what level of resolution, fidelity, overall number of distinct textures, etc., is actually needed…
A video game is the sum of its parts… and there are team leads in charge of the departments for each of those parts, who then hire for those departments, and then you have management and/or some kind of overall creative director(s) in charge of the… entire recipe of exactly what is going to be baked into the proverbial cake.
It is these people’s jobs to come up with an overall vision, and then ensure it is implemented on time, within the budget.
You know, ‘manage’ the game’s development.
Their overall ‘recipes’ including stupid-huge texture sizes and whatnot… that’s a choice, not some kind of God-given or fundamentally unbreakable scientific, natural law of gaming.
…
As of the latest Steam Hardware Survey, about 7% of PC gamers have a 4K monitor.
Far more console players have a 4K-capable TV, but it doesn’t really matter, because no currently existing or announced upcoming console can actually, truly render anything with detailed, super-realistic graphics at 4K 60 fps… to hit that, they have to use checkerboard rendering plus frame-upscaling tech… which puts the actual render resolution at 2K or less… and even 4K 30 fps is often still reliant on checkerboard rendering / frame upscaling.
‘4K’ on a gaming console isn’t actually 4K, all that extra detail usually just gets wasted anyway, blurred out or otherwise lost by the checkerboard rendering or frame upscaling.
Generally speaking, the only games on consoles that can actually run at actual 4K are the not hyper realistic graphics games, they are the ones with simplified or stylized art.
…
Acting as if 4k and 8k textures are some kind of mandatory minimum that must be included in all releases of all games is ludicrous.
As Felix points out… just make these high end textures an optional, free DLC.
The AAA gaming industry has largely done the same thing the car and housing industries have done in the last decade: Everything for sale is now a high end luxury item, there are no more economy class cars, no more new, modest apartments.
This is insane and fundamentally mismatched with the consumer base, especially right now as the US in particular, and the broader world economy, looks set for a serious downturn, which will obviously mean less spending on entertainment.
…
Also sure, I’ll give you that No Man’s Sky is rather stylized, but they also recently released a massive graphical overhaul update that adds those super high quality textures… and it’s still just a bit over 20 gigs of on-disk space on my system.
If you think MGS V and SOMA and Alien Isolation have ‘highly stylized graphics’, not graphics which basically aim at being very realistic and true to life… with a bit of stylization thrown into character design / world design / etc… I don’t know what to say; I don’t know how you can claim those games are ‘highly stylized’ in the way that, like… Wind Waker or Valheim or Selaco are.
4k textures do not become magically useless when you have a 1080p monitor. The thing about video games is that the player can generally move their head anywhere they want, including going very close to any texture.
Oof you really don’t know how any of this works, do you?
Unless we are talking about some kind of… gridded-out master texture map that holds a whole bunch of textures common to, say, a particular level or a particular biome/environment’s commonly used asset set, where the game addresses sections of this one large image as the particular textures of particular objects… and that master file is 4K or larger and is just always kept in memory for, say, a particular level…
Then uh, no, if you have a 1080p monitor, and your entire screen is filled up in game by say a 4k texture of a wall, or a poster or something…
All you are doing is pushing 4x the pixels through the game and your system to ultimately still render at a maximum of the 1080p your monitor can show you… for every single texture in the game.
This is why it often doesn’t even make sense to run a more modern game at very high or ultra texture settings on a 1080p display… you just literally cannot see the difference, and all it does is slow down the game and your system.
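The arithmetic behind that claim, under the simplifying assumption that a single textured surface fills the whole screen, looks like this:

```python
# If one textured surface fills a 1920x1080 screen, the renderer can only
# resolve ~2.1M pixels of it, regardless of how many texels the texture has.
# This is a deliberately simplified model of the argument above.
screen_pixels = 1920 * 1080
tex_4k = 3840 * 2160

print(f"visible screen pixels:       {screen_pixels / 1e6:.2f} M")
print(f"4K texture texels:           {tex_4k / 1e6:.2f} M")
print(f"texels that cannot be shown: {(tex_4k - screen_pixels) / 1e6:.2f} M")
```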
…
You’d get better image quality and performance from having a 1080p texture, Anisotropic Filtering, and perhaps some kind of Anti-Aliasing.
There are a small number of games that allow you to render the entire game at, say, 105, 110, or 125% of your actual display resolution and use that in lieu of Anti-Aliasing… and in those scenarios, having your textures at 125% of 1080p can improve image quality by reducing jaggies in a much more brute-force way.
But this is not done very often, because while yes, it can provide superior image quality compared to many kinds of Anti-Aliasing, it is usually massively less performant and will degrade your FPS significantly.
…
There are a myriad of possible scenarios where it could make sense for a certain class of textures in a particular game and engine to be significantly larger than the textures for common objects and buildings and such… but that would be an extremely in-depth, technical, specific and particular, case-by-case discussion.
The majority of disc space on a game like CoD is textures, audio and FMV. There’s no compressing 4k textures to get them to a reasonable footprint without losing quality
And you can’t make 4k quality optional because…?
Some games do that, especially at a generation border. It’s not a ton of extra effort, but it’s low-return: a game doesn’t sell better or get a lot of press for being smaller.
I hear you, but I will say that there’s a lot of indy games out that are great but mimic the graphics (and requirements) of old. Crow Country is a good one top of mind.
Point being it’s more about what people want to make, IMO.
It’s also about what people want to buy. If games with that aesthetic reliably sold like gangbusters, AAA would follow.
It’s also about what people want to buy. If games with that aesthetic reliably sold like gangbusters, AAA would follow.
Rubbish. In this case it is marketing spending which creates the demand, not the other way around.
Our marketing team isn’t good enough to change consumer behavior like that. I’d love to work with a team with that ability.
I don’t see it that way. You can compare it to movies.
Low storage space, cartoon-ish games are comparable to animated movies and japanese anime. While successful, not everybody watches them. There are people who prefer live action movies, which have photo-realistic high-fidelity graphics (after all, they are photos).
Low storage space, cartoon-ish games are comparable to animated movies and japanese anime.
Not really, not in the slightest.
*indie
deleted by creator
You’ll have to correct him 500 times for it to stick.
I’m jonesing for that edit
They didn’t even correct it this time, so I’m not keeping my hopes up.
300 gigs is fucking ludicrous, i’m genuinely shocked that anyone is delusional enough to defend that.
Welllll… everything in software development is trade-offs.
It’s honestly pretty rare that one solution is unequivocally “better” than another, across every dimension you might care about (which includes non-technical things).
The kinds of egregious defects you might think of as brazen incompetence or laziness are more often the result of everyone (technical and non-technical alike) refusing to actively pursue one side of a trade-off and hoping that the devs can just “nerd harder”.
Technical constraints as in the case of the N64 example can actually help avoid the “just nerd harder” fallacy, because they prompt serious discussions about what you can and can’t compromise on.
Ironically, when we sit here as users and complain about games not being optimized in this way or that, we’re also refusing to engage in a conversation about trade-offs and insisting that devs just “nerd harder”.
Edit: That’s not to provide any excuses for the blatant financialization of the industry, which prompts the whole “don’t trade off anything, just have them nerd harder” mindset… but to warn yall that even if the market wasn’t ruled by greedy suits, we would probably still feel like old games managed to do more with less, cuz well… trading away 500 MB of bundle size so you can get better logging of resource management in production wasn’t really an option back then.
Oh yeah?
What’s the tradeoff for not making 4K textures an optional download?
They’re chasing a pixel fidelity higher than most people’s TVs, at the cost of everyone’s disk space.
Having two different configurations of assets requires making a system that can switch between them, separate deployments for them, some way to actually fetch the asset pack by the users, testing to make sure both configurations work correctly, actually deploying the separate asset pack during an update, and then spending time fixing bugs that inevitably come up with any added complexity.
Could they do it? Absolutely. Should they do it? Probably.
Would there be no downside, no tradeoff? Claiming so is plainly ridiculous.
Lol, except that we’re talking about images here, not domain-specific matrices or something.
The complexity of optimising an engine for a SPECIFIC image size would be orders of magnitude greater than just using any of the preexisting tools that support all image sizes.
It’s like rationalising why you can’t go into an area of a map because the devs didn’t test every single possible renderable frame in that area: sure, it’s possible, but you’d be laughed out of every meeting for even suggesting it.
Also, gotta ask: do you think updates have to redownload every single file of the game? Otherwise, why would they redeploy static images every update? (Because arguably the most common optimisations on the web exist for this exact situation: not having to resend large media files.)
Don’t they already have to switch between assets to fit in lower-end sized VRAM?
Higher-ups noticed gamers think “realistic” = “good” and blame developers instead of executives for the resulting problems.
Welllll… everything in software development is trade-offs.
Trade-offs between “let’s release this unfinished piece of junk NOW” and “let’s spend a couple of months more and make sure the code is optimised and without major bugs”.
…and then there’s Star Citizen…
Ok but like, Kirby and the Forgotten Land Switch 2 edition + DLC is going to be 1mb smaller than the Switch 1 version without the DLC
Mmmmmm, shareholder profits.
I have the suspicion it’s not even about shareholder profits; it’s the use of dumb/useless metrics of success.
It’s the equivalent of measuring a programmer’s productive output in number of lines of code written. It leads to code like this:
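(The original post’s snippet isn’t reproduced here, so the following is a stand-in in the same spirit: trivially padded code that exists only to inflate the line count.)

```python
# Ten lines where one would do.
def is_positive(number):
    if number > 0:
        result = True
    elif number == 0:
        result = False
    elif number < 0:
        result = False
    else:
        result = False
    return result
    # Equivalent one-liner: return number > 0
```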
Something similar happens with storage space: It is wasted unnecessarily because media designers are paid for “high definition” assets.
I was thinking more of the company prioritizing a yearly release schedule with little to no money/manpower invested in optimization. Money not spent on the game is money to sate the shareholders.
I really like efficient code, and that includes memory and storage-efficiency.
Luanti, where I run a server rn, uses less than 1 GB of storage space for a huge world, and I think the whole program code for all of Mineclonia plus the core Luanti engine only uses something like 30 MB. It’s really storage-efficient.
I looked up luanti. Perfect time to get into writing mods again lmao.
Yet another awesome thing about Luanti. My friends aren’t convinced it’s Minecraft and say it’s a rip-off, just buy the Microsoft one, and I’m like, no, fuck Microsoft, not doin’ it.
This ignores that the 300 gigs is largely already heavily compressed, saving you terabytes of space.
I’d argue that this is similar to stores writing “you saved x [currency]” on the receipt. There’s a lot of unnecessary data in AAA games.
Yeah, high def assets aren’t exactly light on disk space.