I was at the conference; the next slide listed a bunch of other companies that are planning to use NVIDIA's hardware too. I also saw a VW and an Audi in the hall with "powered by NVIDIA" stickers. So no, they are just highlighting a new addition to the family of manufacturers that are already using, or have decided on, NVIDIA DRIVE.

I can't speak for the rest of Europe, but at least here in Sweden most of the "big" universities use Linux in their labs and teach courses that depend on the user running Linux. Most of my professors also use Linux. For example, at Linköping (LiTH), the CS department's SU labs all run Linux Mint, with the added benefit of having a bunch of standard WMs installed too (e.g. i3 & xmonad). The labs are always packed as well :)

Yes. It's based on that proposal. In fact, the author of that paper is a co-author of this one. Even better, everyone who has written a "terse syntax" paper has in some way co-authored this one. This seems like a very strong contender for a terse syntax for Concepts. This is great; we might get a terse concept syntax in C++20 after all!

Found the other engineer or computer vision scientist! Through the entire gif my mouth was basically hanging open. People really underestimate how hard the problem of occlusion is, especially in real-time.

One problem with these HUDs is that while the flippy buttons are cool, they tend to be very unintuitive (even with occlusion!). That applies to all VR/AR GUIs though. Some are better suited than others, but they all suffer from the same disconnection from a real-world analog: no haptics. Being able to feel the click of a button, the pressure needed for it to activate, and the form of it makes a world of difference in usability and intuitiveness.

I've tested some haptics devices in the past, and it's among the coolest tech I've ever tried. If you search online, try looking at videos of the 3DSystems Touch. It gives you 6 DOF of input and 3 DOF of force-feedback output. Essentially, if I create a virtual sphere with micro-sized bumps on it, I'd actually be able to feel those bumps, and also be pushed back by the sphere (as if it were there). You can also simulate other materials by changing the friction and stiffness of the surface (e.g. make it feel like it's made of ice or rubber).

One cool application of this sort of tech that blew me away was a program simulating real lockpicking. I have done locksport before, and I can tell you that it's actually quite accurate. You can feel the tension being applied on the lock, and the weight and form of each pin. You can tell which pin is binding by how differently it reacts to the pick. The cylinder even rotates a bit to show that the pin is set, and makes a nice clicking noise along with the feedback.

Hopefully we'll see more affordable haptics in the future; right now, a single one of these devices costs $600...

Wow, nice article, thanks!
But as far as I can see, this principle only speaks about two given rays (incoming and outgoing) and says nothing about the AMOUNT of rays at all.
One shouldn't assume that it's enough to emit some number of rays from the eyes to reproduce the [exact] picture, because the number of incoming rays is close to infinite.
AFAIK, physically based rendering works [or tries to work] as in real life -- emitting incoming rays from the lights (in the hope that some of them will reach the eyes).

If you are talking about regular old ray tracing (Whitted raytracing), then you are indeed correct that it won't simulate all light phenomena. However, Monte-Carlo-based raytracers (like path tracers, bidirectional path tracers, photon mappers and MLT) will produce an image that is photorealistic once it has fully converged. Some of the methods above do indeed also shoot "light" from the light sources (like BPT and photon mapping), but that is not a requirement for achieving realism. What e.g. path tracing does is shoot rays from the eye, follow a single path (picking a random direction at each intersection if the material is fully diffuse), repeat this several times, and take the "average" (not really, but something like that) of the iterations. By the Monte Carlo argument, after infinitely many iterations it will have converged to the true image of the scene (with all light phenomena taken into account too).
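If it helps to see the "average" part spelled out, here is a tiny sketch of the per-pixel Monte Carlo estimate. It's not a real renderer; trace_path is just a placeholder for the actual path-tracing step:

```cpp
#include <random>

// Stand-in for a real path tracer: in practice this would shoot a ray from the
// eye through pixel (x, y), bounce it around the scene in random directions,
// and return the radiance carried back. Here it just returns a dummy value.
double trace_path(int /*x*/, int /*y*/, std::mt19937 &rng) {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(rng); // placeholder radiance sample
}

// The Monte Carlo part: average many independent path samples per pixel.
// As num_samples grows, the estimate converges to the true pixel value.
double render_pixel(int x, int y, int num_samples, std::mt19937 &rng) {
    double sum = 0.0;
    for (int i = 0; i < num_samples; ++i)
        sum += trace_path(x, y, rng);
    return sum / num_samples;
}
```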

Also, you mentioned physically based rendering. As far as I know, you don't need a raytracer for a renderer to be physically based. What defines PBR is that it uses materials that follow real-world properties (usually via some BRDF). Sure, having a good renderer is important for PBR, but a raytracer is not a requirement.
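To be concrete about "materials that follow real-world properties": the simplest physically based material is a perfectly diffuse (Lambertian) BRDF, which is just albedo divided by pi. A tiny sketch (the Vec3 type is mine):

```cpp
struct Vec3 { double x, y, z; };

// Perfectly diffuse (Lambertian) BRDF: the reflected radiance is independent
// of the outgoing direction, and the albedo/pi factor keeps it energy
// conserving (it never reflects more light than it receives).
Vec3 lambertian_brdf(const Vec3 &albedo) {
    const double kInvPi = 1.0 / 3.14159265358979323846;
    return {albedo.x * kInvPi, albedo.y * kInvPi, albedo.z * kInvPi};
}
```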

Summing this up, it's safe to say that modern (i.e. Monte Carlo and such) regular raytracing (i.e. shooting rays from the eyes) should produce the true image (with all light phenomena taken into account too), and PBR is more about materials than about the way rays are shot.
Thanks to both of you, guys! Seems like I was thinking about it wrong (being a non-graphics programmer).

Thanks for making this thread! I'm an exchange student coming to TUM this semester, and I was wondering a bit about public transportation in Munich.

I've (finally) found a place to live; the only problem is that it's in Zolling, a good 30 km from the Garching (IN) campus. Are there any good bus routes? And is it viable to use TUM's semester ticket to travel to and from campus on normal days as well? Anyone with experience of living in e.g. Zolling and commuting to Garching?

This might be a stupid suggestion, but would going by bike also be an option? I have no problem biking for a couple of hours; my main concern is the roads (are there roads to and from Zolling with a bike-specific lane?).

Also, my German is not very good. How comfortable are people in Germany with speaking English, in case I need to fall back on it? Would this be a problem or not?

Only the second exploit (Spectre) has been proven to work on AMD hardware as well. The first one (Meltdown) only affects Intel and ARM, at least for now: the Flush+Reload cache attack combined with the out-of-order execution bug still seems to trigger on AMD hardware, but the researchers haven't been able to reproduce the full exploit on AMD as of yet.

As far as I can tell, the Spectre attack is a lot harder to trigger since it needs many more preconditions (which are a lot less likely to hold than Meltdown's). It also doesn't have a patch ready, so the patches you are seeing pushed to all major OSes today don't fix Spectre. They all patch for Meltdown, which is an ARM- and Intel-exclusive exploit (for now) and doesn't affect AMD hardware.

Google Project Zero is right to say that the vulnerability does affect AMD hardware (via the Spectre attack), but AMD is also right to say that the patch (which will slow down Intel chips) will not apply to their hardware, since that patch deals with the Meltdown attack. They can thus disable the patch and get by without the performance hit Intel chips take, while also not being affected by Meltdown.

You can see this on the Spectre attack website, in the whitepapers and in the FAQ.

"Variants of this issue are known to affect many modern processors, including certain processors by Intel, AMD and ARM. For a few Intel and AMD CPU models, we have exploits that work against real software. We reported this issue to Intel, AMD and ARM on 2017-06-01 [1]."

...

"A PoC for variant 1 that, ... If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU."

According to the FAQ section on the official website, the Meltdown attack does not affect AMD hardware. However, the Spectre attack does indeed seem to affect all microprocessors (and there is no patch for it at this time). It seems that the Spectre attack needs many more preconditions in place, and is thus not as dangerous or general as the Meltdown attack.

I looked at the Meltdown whitepaper, and it seems that the Flush+Reload cache attack combined with the OoOE bug still triggers the cache side effects. I still don't get why it doesn't work on AMD hardware though (the researchers at the end of the paper seem equally puzzled). Anyone have any ideas on the underlying reason why the exploit doesn't work on AMD processors?
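For anyone who hasn't read the paper: the measurement primitive itself is tiny. This is only a rough sketch of the Flush+Reload timing side (x86 intrinsics, GCC/Clang), not the exploit, and the 100-cycle threshold is just an illustrative number that varies per CPU:

```cpp
#include <x86intrin.h> // _mm_clflush, __rdtscp, _mm_mfence (GCC/Clang on x86)
#include <cstdint>

// Core of a Flush+Reload probe, simplified: flush a cache line, let the
// "victim" (speculative / out-of-order code) possibly touch it, then time a
// reload. A fast reload means the line was brought back into the cache.
uint64_t time_access(volatile uint8_t *addr) {
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                     // the measured access
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

bool was_cached(volatile uint8_t *probe, uint64_t threshold_cycles) {
    return time_access(probe) < threshold_cycles;
}

// Usage sketch:
//   _mm_clflush((const void *)probe);   // flush
//   /* run the victim code here */
//   bool leaked_bit = was_cached(probe, 100);  // reload & time
```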

Perhaps not the most powerful supercomputer in the world, but here it is on Triolith!

I gained limited access to it for a course at Linköping University in Sweden. The program I'm running there is one of the labs in the course; the task was to parallelize a particle simulator using OpenMPI to "verify" the gas law pV = nRT. Their system runs a scheduler called SLURM for queuing and executing batch jobs.
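If anyone is curious what that looks like in practice, here is a rough sketch of the structure (not the actual lab code; the particle simulation itself is elided):

```cpp
#include <mpi.h>
#include <cstdio>

// Toy version of the lab's structure: each rank simulates its own slice of
// the particles, sums up its wall-collision impulses locally, and rank 0
// reduces them into a total to estimate the pressure for pV = nRT.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_impulse = 0.0; // accumulated from this rank's particles
    // ... simulate this rank's share of particles, add up wall impulses ...

    double total_impulse = 0.0;
    MPI_Reduce(&local_impulse, &total_impulse, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("total impulse on walls: %f (from %d ranks)\n",
                    total_impulse, size);

    MPI_Finalize();
    return 0;
}
```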

If you are interested, you can read more about Triolith and the NSC (National Supercomputer Center) on their homepage.

This was among the first courses in the MSc program, so I wanted to give her a second chance. Instead, I made it clear to her that this isn't something you do here, and that you'll only be fooling yourself if you do.

If you do this sort of thing, then you should really think hard about what you're doing at University, and what you or your parents are spending money on. The piece of paper you get at the end isn't very important, but the knowledge you acquire is (companies be damned).

Seems both their website and their research paper are up now. See more at https://www.krackattacks.com, if it hasn't been hugged to death yet, that is. The research paper can be found over here. This is going to be a fun Monday for every sysadmin in the world...

That is what is done in e.g. the Intel Itanium (if memory serves right), and it's called branch predication (not to be confused with branch prediction). It works well on the Itanium since VLIW-based architectures tend to run a lot cooler than superscalars. If I remember my computer architecture right (it was some time ago), the reason VLIWs tend to run cooler is that there is much less hardware complexity in the instruction dispatch unit: all instructions are scheduled at compile time. I don't know if more modern VLIWs (if they are still a thing, that is) do this anymore though. If anyone is interested in reading more on VLIWs, the course I took has all lectures open to anyone: http://www.ida.liu.se/~TDDI03/lecture-notes/lect9-10.pdf.
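To illustrate the idea (in plain C++ rather than Itanium assembly, so take it as an analogy only): with predication, both "arms" are computed and a predicate selects the result, so there is no control flow for the hardware to predict.

```cpp
// With a branch: the hardware has to predict which way this goes.
int with_branch(int a, int b, bool p) {
    if (p) return a + 1;
    else   return b - 1;
}

// Predicated style: both arms are computed and the predicate selects the
// result. On a real predicated ISA (like Itanium) each instruction carries
// the predicate instead; here it typically compiles to a conditional move.
int predicated(int a, int b, bool p) {
    int t = a + 1;
    int f = b - 1;
    return p ? t : f;
}
```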

As Brian Marick's "How to Misuse Code Coverage" perfectly puts it: "they're useful if they're used to enhance thought, not replace it". The guy writes tools for analyzing coverage for a living, and he has a couple of stories to tell you about its blatant abuse by managers and developers alike.

Not surprisingly, there is also a research paper to go along with this presentation, which is by the way (of course...) also typeset in PowerPoint (for those who didn't see the presentation). Author, if you are reading this, this is some damn fine/amusing work!

I must point out that all your comments sound like you either created the test or work there (hinting a bit about what to do, what doesn't exist, etc...). Just an interesting observation (it's probably just my imagination, heh :p).

Yes, indeed. You could probably do this by hand, but it would take a long time and a lot of paper (just like emulating stuff by hand). It's a lot better to just write it down as a program, load in the data, fill in the blanks, and set it off to do the work for you.

Basically, you then just call premake5 (gmake | vs2015 | xcode) to generate the project files into build/. I'm running a Unix-like system, so after generating them I'd call make -j8 -C build/ to build my targets into bin/. I usually have at least two targets: the program itself, and then its respective test suite (using Catch.hpp), maybe a documentation target too. The premake tool is magical; the syntax is gold compared to CMake (albeit a bit more restricted in its feature set).
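For reference, the test-suite target is not much more than a translation unit like this (the add function is just a hypothetical stand-in for whatever the program target exposes):

```cpp
// tests/main.cpp -- let Catch generate main() for the test target.
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// Hypothetical function from the program target; the test target just links
// against the same sources/library and exercises it.
int add(int a, int b) { return a + b; }

TEST_CASE("addition behaves as expected") {
    REQUIRE(add(2, 2) == 4);
    REQUIRE(add(-1, 1) == 0);
}
```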

Anyone on the fence about writing an emulator for fun: give it a shot! It really is a great learning experience in several areas, especially when writing tests for it too (like in the mentioned project). I would also recommend keeping CowGod's Awesome Technical Reference close at hand. Also, if anyone would like a very quick summary of the specifications, I wrote a very rough document over here (the repository also has a complete C++ implementation, though it doesn't have a proper frontend yet; never got around to doing it).
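To give a feel for why it's such a nice starter project, the core of a Chip-8 interpreter is basically just this loop. A bare-bones sketch with only a handful of opcodes; the register layout follows the spec, the surrounding structure is my own:

```cpp
#include <array>
#include <cstdint>

// Bare-bones Chip-8 core: fetch a 2-byte big-endian opcode, bump the PC,
// decode the top nibble, execute. Only a few opcodes are shown here.
struct Chip8 {
    std::array<uint8_t, 4096> mem{};
    std::array<uint8_t, 16> V{};   // registers V0..VF
    uint16_t I = 0;
    uint16_t pc = 0x200;           // programs are loaded at 0x200

    void step() {
        uint16_t op = (mem[pc] << 8) | mem[pc + 1];
        pc += 2;
        switch (op & 0xF000) {
            case 0x1000: pc = op & 0x0FFF; break;                // 1NNN: jump
            case 0x6000: V[(op >> 8) & 0xF] = op & 0xFF; break;  // 6XNN: VX = NN
            case 0x7000: V[(op >> 8) & 0xF] += op & 0xFF; break; // 7XNN: VX += NN (no carry)
            case 0xA000: I = op & 0x0FFF; break;                 // ANNN: I = NNN
            default: /* the remaining ~30 opcodes go here */ break;
        }
    }
};
```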

Developed a bunch of shitty prototypes I'm glad to have lost, the executables and, more importantly, the source code. However, my first "complete" games must have been Damn Rocks or Tail: TWS, made using the old Macromedia (yes, that old) Flash 8 Professional IDE with ActionScript 3.0. The latter game I was (surprisingly) able to earn some money on, since FGL sold licenses to the highest bidder, which was PegasGames. I gave 25% of that to the composer, Daydream-Anatomy, who made both the game and the trailer seem a lot better than they actually were. I could try finding the source code if anyone is interested in scratching their eyeballs out. If I remember correctly, I didn't indent my source code back then... Indeed.

Oh neat! I really should get around to learning Rust properly; so far I've only scratched the surface by following some quick guides. Some time ago I made a Chip-8 interpreter too, in C/C++ though. A lot of fun and a really good learning experience. I didn't get around to the graphics, sound and input handling though; I was thinking of using SDL2 but never got to it :) Good job OP on carrying the entire project through! EDIT: if anyone is interested in learning how not to write a Chip-8 implementation, see my source; sometimes I get nightmares about it. However, the documentation I wrote about the specs might be of use to others, which can be found under 'doc/specification.md'.

I'm hijacking this thread a little bit to ask a very similar question. Using GL 2.1 (targeting old GPUs...), I'm making a small game using voxel geometry. Since GL 2.1 doesn't support instancing, I have done batching (I think?), just merging all voxels into the same mesh at setup time. Is this a good way to solve this, or are there better ways?

More importantly, with batching, do I really need to duplicate all geometry? That is, positions, normals, texture coordinates, material attributes, etc...? Can't I in some way reuse everything else, except for the mesh positions? It feels like a bit of a waste to upload data that is 3/4 repetition (that can't be good for the on-chip GPU cache either, which would reduce performance by quite a bit).

I'm not sure, really. You can try and see if you can do some optimizations using index buffers, but it might be tough since each index is used to fetch all attributes. You can also try to cut some data out: check if you can compute the texture coordinates manually, or if you can skip the normals completely, etc.
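To make the index-buffer option concrete, here is a rough GL 2.1-era sketch of a batched mesh with shared vertices. It assumes an extension loader like GLEW is already initialized, and the attribute-pointer setup is elided:

```cpp
#include <GL/glew.h>
#include <cstdint>
#include <vector>

// Batched voxel mesh, GL 2.1 style: all cubes merged into one vertex buffer
// at setup time, with an index buffer so a shared vertex (position + normal
// + uv) is stored once and referenced many times.
struct BatchedMesh {
    GLuint vbo = 0, ibo = 0;
    GLsizei index_count = 0;

    void upload(const std::vector<float> &vertices,          // interleaved attributes
                const std::vector<std::uint32_t> &indices) {
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
                     vertices.data(), GL_STATIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(std::uint32_t),
                     indices.data(), GL_STATIC_DRAW);
        index_count = static_cast<GLsizei>(indices.size());
    }

    void draw() const {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        // ... glVertexAttribPointer calls describing the interleaved layout ...
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, nullptr);
    }
};
```

This only saves memory where vertices are genuinely shared (same position, normal and uv), which for cube faces is less often than you'd hope, so cutting or computing attributes is still worth looking into.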

May I ask why you use 2.1? When you say older GPUs, do you mean very, very old GPUs? Even a lot of old GPUs (six-year-old ones) support OpenGL 4.4, but if you want to stay on the safe side, switching to 3.x would be a huge improvement.
(Note that my "six-year-old GPU supports GL 4.4" observation is based on just one GPU (AMD Radeon 5800), but still, I believe a lot of them support at least 3.x.)

Thanks for the help! But yeah, I'm targeting my old ThinkPad X200S, which runs on an old Intel GMA 4500, and as far as I know it only supports GL 2.1 (with the open source Intel drivers too). Also, it could be good to know how to do this since a lot of mobile devices are still running GLES 2 (similar to GL 2.1).

As many here have pointed out, keeping the default vim bindings compatible is important. That doesn't mean including a few extra bindings via the leader prefix is wrong though. I have tried to make my configuration as minimal as possible, while still making it more modern and pretty with themes like GruvBox and LightLine. Some things I have changed are features I regard as a bit "old", such as swap files (which are fine to have if you are running on a constrained system and opening huge files).

Incredibly nice vimrc though, OP; I like the idea of not having any plugins. Love the idea of having the mode colors at the top too.

There is the Game Engine Architecture book, which is quite good; it also covers ECS if I'm not mistaken. The major problem to be solved, and one that ECS partially solves (since dependencies are minimized), is making data as cache-friendly as possible, and also more concurrency-friendly. This has become especially important with the current console generation (eight cores, right?).
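As a toy illustration of the cache-friendliness point (not from the book, just how I picture it): components live in contiguous arrays, and a system walks only the arrays it actually needs, linearly.

```cpp
#include <cstddef>
#include <vector>

// Toy "structure of arrays" component storage: each component type lives in
// its own contiguous array, so a system that only needs positions and
// velocities walks memory linearly and keeps cache lines hot. Entities are
// just indices into these arrays.
struct Transforms {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
};

// The "physics system": touches nothing but the data it needs.
void integrate(Transforms &t, float dt) {
    for (std::size_t i = 0; i < t.x.size(); ++i) {
        t.x[i] += t.vx[i] * dt;
        t.y[i] += t.vy[i] * dt;
        t.z[i] += t.vz[i] * dt;
    }
}
```

Splitting that loop over index ranges is also what makes it straightforward to spread the work across those eight cores.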