When the system requirements for Monster Hunter World were revealed a few weeks back, there were equal parts confusion and worry over the recommended specs. A GeForce GTX 1060 was recommended to play MonHun World at 1080p / 30fps, prompting concerns that there might even be a frame rate cap in Monster Hunter World.

Fear not though, for ResetEra member FluffyQuack was allowed some hands-on time with the PC release of Monster Hunter World and can confirm that there is no frame rate cap in MonHun World whatsoever.

Monster Hunter World runs on Capcom’s proprietary MT Framework engine. Most games built on this engine are capped at 120fps, but Monster Hunter World is uncapped on PC. However, if you want to hit those higher frame rates you’re going to need a very beefy system.

“I got permission from a Capcom rep to post impressions of a review build of the game,” wrote FluffyQuack. “The game supports framerates above 60fps (the framerate lock options are 30, 60, and “no limit”). Unlike most MT Framework games, the game can go beyond 120fps, though you’ll need a very beefy PC if you want to get close to 100 with settings around max.”

As for just how beefy a PC is required, it turns out Monster Hunter World is a fairly demanding beast. With a GeForce GTX 1080, Intel Core i7-4790K and 16GB RAM, the following average frame rates were achieved in the starting hub at 1440p resolution:

Low: 108fps

Medium: 65fps

High: 60fps

Ultra: 44fps

Unsurprisingly, volumetric lighting is a demanding graphics setting in Monster Hunter World, and dropping it from Ultra to High pushes the average up to a respectable 50fps. Based on this, it seems reasonable to expect a GeForce GTX 1070 to manage a solid 60fps at 1080p Ultra, although performance on a GTX 1060 or AMD Radeon RX 580 may be a little more concerning.

A major thing to remember with these results is that we’re still a fortnight out from Monster Hunter World’s August 9th PC release. There may yet be optimisation work to come, and Nvidia has yet to release its Game Ready GPU driver for MonHun World.

Visually, Capcom has said it's aiming for graphical parity with the console release.

Should look great in 4K, but don't hold out for 4K at 60fps, as the game is poorly optimised. If you want a locked 60fps, go 1440p, but then that will only match the PS4 Pro's graphics and resolution, just with a slightly higher frame rate.

As Capcom have said, they are keeping visual parity with the console versions, so it won't exceed console graphics. You need a 1080 Ti to get a locked 60fps at the same graphics and resolution as the PS4 Pro; that's how unoptimised the game is.

There are many nuances, like LODs, draw distances, dynamic resolution rendering, etc., that the PC can render at higher detail/distance/density than the console, yet using the same assets. "Visual parity" does not mean "it will look the same"; it merely means "we didn't remaster the assets". The fact that the framerate is uncapped is alone a huge jump in visual fidelity (to me, anyway) compared to the cancer framerates of the PS4.

Well, that's the thing: you are switching between them. Try doing this: play on 75Hz one day, switch to 60Hz without playing, and then play at 60Hz the next day. Our eyes (our brains, more accurately) are good at noticing sudden differences.

Dunno, man, it can happen overnight and I'd notice in the morning. I also don't like sitting at other people's computers (my friends', my gf's, etc.) because the 60Hz screens feel slow. Maybe you're not as sensitive to refresh rates, which is OK. If 30fps works for you, that's fine. But to suggest that lower framerates must work for everyone, that's just silly.

It is… It's got a 312mm^2 die, while the GTX 560 has a 332mm^2 die… Also, the GTX 1080 is a GP104 (GeForce Pascal 104) chip, while the GTX 560 is a GF104 (GeForce Fermi 104) chip… The GTX 560 came out costing $200 at launch ($254 in 2016 dollars), while the GTX 1080 cost $700 at launch… Not to mention that Pascal had a 2x lower failure rate than Fermi…

But haven't transistors been getting smaller, so they can fit more onto a smaller die? Plus, it makes no sense for a high-end GPU to be many times less powerful than a mid-range GPU. Honestly, I think die size being what determines the tier of a card is BS.

lol, the point is that there are literally only two cards that are somewhat affordable and meant for gaming that would consistently outperform the 1080. Being in the top three gaming cards kinda makes it a high-end card no matter what.

Performance isn't a price or tier factor… A GTX 280 is high-end to this day, since it has a 567mm^2 die and would cost Nvidia just as much to produce on the 65nm process node as it did back in 2009.

Die size, R&D and failure rate are price and cost factors, along with wafer cost and VRAM cost (8GB of GDDR5X costs AMD and Nvidia $11.50).

Why would I measure it like that? Something is high-end if it's among the best of a generation, and the GTX 1080 is. Everything else is completely unimportant, and I see no reason why anyone would care about the die size whatsoever.

What it comes down to is only the performance and where it was placed by Nvidia, and as we all know the xx80 (Ti) is always on top of the normal gaming cards.

I measure it like that, because that is how it is measured… It's NOT subjective. And the GTX 1080 Ti/Titan Xp are GP102 at 471mm^2 and not the best Pascal GPU Nvidia has/had to offer; Nvidia has/had GP100 at 602mm^2, which they didn't release due to a lack of competition from AMD, so there could have been an even better GPU than the GTX 1080 Ti, a GTX 1090 for example…

You don't get the point at all (answering the other comment I can't reply to). Even if it's not the best they could offer at the time, and thus not the highest tech available, it's still high-end if it's among the best products of its generation, which it is. That isn't debatable in any way; that's how it is. Not the "highest tech", but still the high-end product.

So a new GPU with a die size double that of the 1080 but half the performance would be a high-end card? 99.9% of people will say the 1080 is a high-end card. What has die size got to do with anything? They can fit more on a smaller die due to the fact (correct me if I'm wrong) that transistors have gotten a lot smaller over time.

Die size is what determines if a GPU is high-end or low-end or in-between.

And 99.9% of people used to say the earth was flat… they were not correct…

Performance is a byproduct… A chip is high-end based on die size, and the cost factors are die size, R&D costs and failure rate, along with wafer cost and VRAM cost (the last two just track inflation, and VRAM is super cheap, $11.50 per 8GB GDDR5(X) module).

Die size relative to the biggest die that can be produced and cooled is what matters… Again, not opinion, facts… people can say whatever they want, especially ignorant YouTubers and reviewers…

Almost no one buys hardware based on die size. Everyone looks at the things that matter: specs, clock speeds, VRAM, performance, etc… Since there is "no high-end card", that would automatically make the 1080 (Ti) the highest-tier card. The tiers go something like this: XX30 = low end; XX50/60 = mid-range; XX70/80 = high end; XX80 Ti/Titan = enthusiast. Not: XX30 = low end; XX50/60/70/80/Ti/Titan = mid-range.

I do understand your points; however, they don't really apply to anyone except those couple of YouTubers and you.

Look, the performance of a card doesn't make it high-end or low-end… And these points apply to everyone, like it or not… And the GTX 1050 Ti and GTX 1060 are low-end… -_- The GTX 1050 and GT 1030 are entry-level chips…

And people don't buy them based on that because they are ignorant… that's why they think that if a card's name is "XX80" it is a high-end GPU and should cost accordingly, when the GTX 1080 should have cost $300 MSRP at launch in 2016. The GTX 560 cost $200 at launch in 2011 ($254 in 2016 dollars), and it was a much more expensive card due to higher R&D costs and a higher failure rate, along with a bigger die…

I OBVIOUSLY bought my 1080 because I wanted a mid-range card and wanted that die size. It had nothing to do with the fact that it outperformed almost every card out there and I wanted the best… sorry… that came out wrong… wanted the mid-tier best.

Why do people care if their GPU is mid-range or high-end?… -_- Performance and tier (low-end, mid-range, high-end) are separate… There can be a GPU that is 10x better than all the rest of the GPUs for 10 years; it can still be a mid-range or low-end card, or it can be a high-end card; it's all about the die size… -_-

GPU tier doesn't represent performance; it represents die size, and thus the cost and the price the chip should be sold at… -_-

As I said, performance is a byproduct… stop caring whether your GPU is high-end, low-end, entry, etc… care whether the performance it gives you is good enough for your needs… The only reason you should care whether a GPU is high-end, low-end or mid-range is its price…

The GTX 1080 should have come out costing $300-330 MSRP in 2016 and NOT $700… At $700 that was above a 100% profit margin… -_- Nvidia is technically 3x more overpriced than Apple… and I bet you people think Apple is overpriced…

Let's just agree we both have different definitions of "high end" and leave it at that, as arguing is getting us nowhere. For me, high end is the highest-tier card that is AVAILABLE to the public, not what could've been available, as that basically means nothing is high end.

Also, what about CPUs? Are the highest-performance ones still mid-range because their dies are a little small?

-_- when will you understand that this is NOT an opinion and that it's a FACT…

Threadripper 12- and 16-core parts are at the lower end of high-end, Ryzen 7 is mid-range, the 6-core Ryzen 5 is low-end, and Ryzen 3 and the quad-core Ryzen 5s are entry-level. The i7 8xxx is mid-range, as is the i5 8xxx; the i3 8xxx are low-end.

CPUs can't be cooled as easily as GPUs and thus have smaller dies, and AMD and Intel also want the dies as small as possible for the best possible yields. That's why Intel's cores have barely increased in transistor count since Sandy Bridge, but have become smaller and smaller.

Well, the next time I buy a GPU I will definitely look only at the die size on the box and completely ignore all other specs and how well it performs, because they're not important. Oh, and if it performs badly I'll just increase the die size, which will magically bump its tier from mid-range to high. Thanks.

Also, there really are no official GPU tiers; there isn't a rule book that states what tier a GPU is. It all comes down to what the user sees as important. If you measure tiers by performance, fine; if you measure them by specs such as shaders and VRAM, fine; if you measure them by die size, by all means go ahead.

But I've been meaning to ask: what ON the actual die makes the difference? Is it the higher transistor or core count? Because if you're basing it just off the size and completely ignoring what is ON the die itself, then you're just making empty arguments. So in short: is it just the raw size and nothing else, or how much of a certain thing they can fit on it?

There are tiers of die sizes, measured in quarter cuts down from the biggest possible die, and that's how they cut them. GP100 is roughly twice the size (100% bigger) of the GTX 1080's GP104, about 1.5x (50% bigger) the GTX 1080 Ti/Titan Xp's GP102, and about 3x the size of the GTX 1060's GP106. Though if I started talking about quarter cuts all of a sudden I'd confuse you even more, and I'm terrible at explaining too… that's why I can't get my point across…

And you should look at the process node too… if the gates on wafer A are 50% smaller than those on wafer B, you'd ideally expect wafer A to give 50% more performance for the same die size…

Wafer prices have so far scaled with inflation for the past decade. There were rumours that this year they'd get more expensive on top of inflation, but so far no info has been given on whether that's true and, if so, by how much.

A wafer with X 40nm gates would cost the same as a wafer with 2X 20nm gates of the same complexity and size, and the complexity of GPUs and the size of the wafers have remained the same since Nvidia switched to CUDA and AMD switched from VLIW to SIMD.

So the bigger the chip, the fewer fit on a wafer, and the more each chip costs.
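To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch of the dies-per-wafer relationship. The $5,000 wafer price is a made-up placeholder, the die areas are the rough figures quoted in this thread (the GP106 area is our own assumption), and defect yield is ignored entirely, so treat the outputs as illustrative only.

```python
# Back-of-the-envelope: dies per wafer and cost per die as a function of die area.
# Wafer price is a placeholder assumption; die areas are the rough figures from the thread.
import math

WAFER_DIAMETER_MM = 300   # standard 300mm wafer
WAFER_COST_USD = 5000     # hypothetical wafer price, not a quoted figure

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: gross dies minus a correction for the round wafer edge."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_per_die(die_area_mm2: float) -> float:
    return WAFER_COST_USD / dies_per_wafer(die_area_mm2)

for name, area in [("GP106 (~200 mm^2, assumed)", 200),
                   ("GP104 (~312 mm^2)", 312),
                   ("GP102 (~471 mm^2)", 471),
                   ("GP100 (~602 mm^2)", 602)]:
    print(f"{name}: ~{dies_per_wafer(area)} dies/wafer, "
          f"~${cost_per_die(area):.0f} per die before yield")
```

Even with these toy numbers, roughly doubling the die area roughly halves the dies per wafer and doubles the per-chip cost, which is the relationship the comment is pointing at.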

It's honestly just a marketing gimmick for consumers, and I think we've gone way too deep into it. High end on the customer side and high end on the technical/manufacturing side are two different things. Really though, why are we still arguing about this? It makes no difference what tier you put it in; as long as the performance of the GPU is good enough for you, then who cares if it's high/low/mid/entry/extreme, etc… Your recent explanation is interesting, though; I've never really looked into stuff like that before, so I'll have to research it a little.

We should care, because Nvidia is/was selling the GTX 1080 at $700 at launch, which was a 116% profit margin in 2016, whereas the 2011 equivalent of the GTX 1080, the GTX 560, was sold for $200 at launch ($254 in 2016 dollars), did much better for its time, and was a much more expensive GPU to produce and R&D than the GTX 1080…

The GTX 1080 should NOT have cost more than $300-330 MSRP at launch in 2016, $350 tops, but the ignorance of people, those damn reviewers, and the fact that Nvidia called it an "XX80" GPU fooled people into thinking that $700 (or even $600) is a correct price for such a small chip.
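For illustration only, here is what that claimed margin would imply if you take the 116% figure above at face value and read it as a markup over per-unit cost; the thread never defines the term, and none of these numbers are verified.

```python
# Illustrative only: unit cost implied by a "116% profit margin" on a $700 launch price,
# reading the margin as a markup over cost. Both figures are the commenter's claims.
price = 700.0
markup = 1.16                 # 116%, taken at face value
cost = price / (1 + markup)   # price = cost * (1 + markup)
print(f"implied unit cost ≈ ${cost:.0f}, implied profit ≈ ${price - cost:.0f} per card")
# implied unit cost ≈ $324, implied profit ≈ $376 per card
```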

"People" also includes your mighty self, I must highlight. If you have all the answers to human problems then I do urge you to go into politics and change the world. Otherwise - sit there behind your keyboard and don't sweat it.

OK, we can agree on something: GPUs are overpriced. And I see what you mean; in terms of what Nvidia can produce, the 1080 Ti is only mid-range (I did some shower thinking), though they never did produce anything higher for that series. But the 1080 is much more powerful than, say, a 980; there was quite a large performance jump between the two series, and unless you do a lot of deep research no one's going to take notice of die sizes or anything else like that. Well, all I can say is I hope the GTX 11XX cards have a die size big enough to satisfy your needs.

Well, that is true, they didn't, just like AMD back in the day of the HD 4000 series didn't produce anything higher than a mid-range GPU, the HD 4870, which was only 20% slower than the GTX 280. The GTX 280 was/is a high-end GPU with around a 600mm^2 die and was appropriately priced at $650 ($775.98 in 2018 dollars), but AMD didn't price the HD 4870 at $650 or $500; they priced it at $300 ($358 in 2018 dollars), and its die size was 256mm^2, even though at the time wafers were more expensive and so was VRAM, especially VRAM. Nowadays an 8GB GDDR5(X) module costs AMD and Nvidia $11.50; back then 1GB was $40-50.

And why didn't they do it? Because they would have been slaughtered… back then reviewers kept an eye on die size and everything… and the same number of people who were aware then are aware now; the difference is that now there are a lot more people buying… a lot more ignorant people.