Michael Dionne's Creative Museum

Wednesday, November 26, 2014

Please take a look at this new blog about web development: http://9creativelessons.com
I am contributing exclusive articles about my experiments, and sharing tips with web, game and graphics developers :)

Tuesday, August 12, 2014

There's nothing precise to do in the game, so you can just roam around, pick up items and fight stupid & unfinished A.I. I just had fun implementing many different things together, creating and defining the mood and ambiance, as well as integrating and fine-tuning core gameplay mechanics (like ladder climbing, prop physics, flashlight & battery, sniper scope, etc.). My goal was simply to become more comfortable with the Unity3D engine by working on a realistic 3D game project.
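
As an illustration of the kind of mechanic mentioned above, here is a minimal, hypothetical sketch of a flashlight & battery system (in Python for brevity; the actual project was C# under Unity3D, and the class name, capacity and drain rate here are my own invention, not the project's code):

```python
class Flashlight:
    """Minimal flashlight-with-battery sketch: the battery drains while
    the light is on, and the light dies when the charge runs out."""

    def __init__(self, capacity=100.0, drain_per_second=2.5):
        self.charge = capacity              # remaining battery charge
        self.drain_per_second = drain_per_second
        self.on = False

    def toggle(self):
        # Only allow turning on if there is charge left.
        if not self.on and self.charge > 0:
            self.on = True
        else:
            self.on = False

    def update(self, dt):
        # Called once per frame with the elapsed time in seconds.
        if self.on:
            self.charge = max(0.0, self.charge - self.drain_per_second * dt)
            if self.charge == 0.0:
                self.on = False             # battery dead: light shuts off

light = Flashlight(capacity=10.0, drain_per_second=1.0)
light.toggle()
for _ in range(12):                         # simulate 12 one-second frames
    light.update(1.0)
print(light.on, light.charge)               # prints: False 0.0
```

In a real engine, `update` would be driven by the frame loop (Unity's `Update` with `Time.deltaTime`), and spare batteries picked up in the world would simply restore `charge`.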

100% C# code. Graphics are placeholders only. It took about 5 months to code from scratch. I had to re-think and re-write the whole collision system logic 3 times in order to get a very fluid framerate, even when there are tons of moving things on the screen at the same moment. Today, I could develop the very same game -or a similar one- a LOT faster.
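
The post doesn't say which of the three designs finally worked, but a classic way to keep collision checks fluid with tons of moving objects is a broad-phase pass that only tests nearby pairs instead of every pair. A hypothetical sketch (Python for brevity; the project itself was C#):

```python
from collections import defaultdict
from itertools import combinations

def spatial_hash_pairs(objects, cell_size):
    """Broad-phase collision: bucket object centers into grid cells and
    only test pairs that share a cell, instead of all O(n^2) pairs."""
    grid = defaultdict(list)
    for name, (x, y) in objects.items():
        grid[(int(x // cell_size), int(y // cell_size))].append(name)
    pairs = set()
    for bucket in grid.values():
        for a, b in combinations(sorted(bucket), 2):
            pairs.add((a, b))
    return pairs

objects = {"crate": (1.0, 1.0), "barrel": (1.5, 1.2), "player": (40.0, 40.0)}
print(spatial_hash_pairs(objects, cell_size=5.0))
# crate and barrel share a cell; the distant player is never tested
```

A real broad phase would also insert each object into every cell its bounds overlap (or check neighboring cells) so pairs straddling a cell border are not missed; the narrow phase then performs the exact collision tests on the few surviving pairs.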

Sunday, December 8, 2013

I cannot find the end or the pattern of Pi; however, I can prove to you there is none.

Being a logical person by nature, I know that things are often much simpler than they appear. We simply need to ask ourselves the right question in order to get the right result.
I can only laugh and then cry when I see reputable research websites, scientific communities, and even movies like "Pi" (http://www.youtube.com/watch?v=oQ1sZSCz47w) all trying to find an end or a pattern in Pi, because their work is all based on a wrong approach, and they all follow the same path.

In fact, the end or pattern of Pi cannot be found, for many reasons, and I can prove it.

1-For Pi to actually have an end, it needs to have a precise starting value in order to be considered relative. Using space (and even time) calculations, you could then find a precise end once reaching the atomic level, but only if you forget the 4th dimension (time) and continue to presume there is no other dimension.

It is logical that if you don't specify how big something is, it can be of any size (infinite vector floating-point precision). Worse, if no size is defined, it shouldn't even exist.
However, if you can define or find its physical size, you can then calculate how many times bigger it is than an atom, which is the lowest possible value. Researchers try to find the value of B without defining the value of A in the first place, and they don't even assume A and B are linked and interdependent. You need to compare, because everything is relative, and a relation always implies at least 2 things. You try to find one thing, but you reject the fact that, in order to exist, this thing needs another one. And this other thing is the scale compared to the smallest possible physical system, which is the atom.

2-Also, lines (like those virtual lines forming the diameter and the circumference) exist in a different dimension than points.

Calculating any point with the expected level of precision requires vector calculation, and thus my first argument is reinforced: without a starting scale value to compare with, there is infinite zooming possibility on a single vector point (dimension zero). Everything must remain relative in order to work, but that is not considered in current research; you can find the value of a variable only when you compare it with another value, else it remains infinite by logic. A point might visually appear to have some consistent width and height, but in fact a point has none and should never even be visually represented, because it is infinite, thus invisible even to the best microscopes. Even at the atomic level, when there is nothing smaller, a point should not appear, because we live in a four-dimensional universe where time (t) always allows for better precision. A time value is constantly evolving; thus, by the time required to simply calculate it, it no longer exists and should output a false value, until you can predict with infinite precision the time you will need to calculate the time value. In all cases, it is impossible to calculate the precision of a single point unless you can manipulate time, and even then, it would be extremely complicated to predict.
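
Incidentally, the impossibility of representing a point with unlimited precision is very concrete in computing: floating-point numbers have a finite resolution, as this small Python check shows (this illustrates machine precision only, not the physical argument above):

```python
import sys

# Floating-point numbers cannot be zoomed into forever: a 64-bit float
# has about 15-16 significant decimal digits, after which increments vanish.
x = 1.0
print(x + 1e-15 == x)            # False: the increment is still representable
print(x + 1e-16 == x)            # True: the increment is too small, it rounds away
print(sys.float_info.epsilon)    # smallest distinguishable step around 1.0
```

Past that epsilon, "zooming in" on a coordinate yields no new information; any finer structure simply cannot be stored in the number.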

3-Any number X divided by a number Y and then multiplied by that same number Y should theoretically ALWAYS output a result of X (the root value). Example: 1*3 = 3, then 3/3 = 1. It seems to work, doesn't it? While this seems to always work, it only works reliably with fractions (fractions work in practice only because they represent a quantification of something, instead of quantifying it as a single number). It is also logical to say that 1/3 = 0.3333333333 (infinite 3s), then 0.3333333333 (infinite 3s)*3 = 1, right? However, this works only with fractions, because using a single decimal number in this valid calculation, the end result is not 1, but 0.9999999999 (infinite 9s).
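
This truncation effect is easy to reproduce with fixed-precision decimal arithmetic (a Python sketch; 10 digits stand in for the "infinite 3s"):

```python
from decimal import Decimal, getcontext

getcontext().prec = 10           # work with only 10 significant digits
third = Decimal(1) / Decimal(3)  # 0.3333333333 (truncated, not infinite)
back = third * 3                 # 0.9999999999, not 1
print(third, back, back == Decimal(1))
```

Note that in standard mathematics the infinite expansion 0.999... is defined to equal exactly 1; the gap above appears only because the expansion was cut off at a finite number of digits.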

So, the whole current mathematical system is based on the assumed principle that 1=1; however, I just proved to you that 1 does not equal 1, but rather 1 = 0.9999999999 (infinite 9s). If 1 does not equal 1, how can you hope to find any precise result from any formula based on this system? Because it works only in practice, and not in theory. Everything is wrong. I found this out when I was 16 years old.

Good luck to all the stubborn researchers. They will most probably end up drilling their brains just like in the movie if they don't give up and stay hooked. If they are obsessed by something, the only way to have a chance of actually succeeding, outside of random and highly improbable luck, is to learn to approach it correctly. You know, just like with women.

What is more important than the precision or the pattern itself is the fact that we found a formula that is efficient at calculating a circumference value very easily from a diameter value. That should be sufficient for now, since we can already "zoom in" enough to get useful and practical precision for virtually all domains created by humans up to now. Once we have understood both the space and time dimensions, we should then come back to calculating Pi with a different approach; only then might we find something really useful out of the ultra-precise result in practice.
Research is not linear; it has been proven in the past that advances in one domain can revolutionize one and even many other domains. By finding answers to other questions, maybe the answers we are looking for will get unlocked. Revolutionary discoveries sometimes happen by error, but for errors to happen, there need to be things happening in the first place. While we focus on trying to find an end or pattern in Pi, our brains are not contributing to other problems and domains, and thus we are limiting our own potential to find solutions to some complex and extremely rewarding problems.

I always fought against society and the system in order to keep a lot of "free" time. I hate this English word because free time is hardly ever really free ($)... In fact, when you are not working for an immediate salary, chances are that you will end up losing some money. But sometimes I feel this is necessary, and more precisely, it is very constructive if you make good use of it. Money can help you buy or build things, but there are things money simply can't buy, like knowledge. To me, knowledge is the most valuable asset, and money is only a tool.

Following this philosophy, my objectives for 2013 are both vague and precise at the same time.

My objectives for 2012 were these:
-Learning PHP, MySQL and AJAX;
-Learning Unity3D (C# and Javascript);
-Learning Microsoft Visual Studio Express C#;
-Learning advanced UVW mapping in 3DStudioMax;
-Learning basic character rigging and animation process in 3DStudioMax;
-Learning ZBrush.

I was, that year again, very lucky and had enough free time to actually complete everything I scheduled, except for learning ZBrush, which I have not yet had the opportunity to explore much. In fact, I had to choose between learning ZBrush or learning Microsoft Visual Studio Express C#, and I decided to learn the latter because I had no projects requiring immediate knowledge of ZBrush, while I wanted to resurrect my old project of a parental control application. I had already invested a lot of time and money into that development about 3 years ago, and I figured it might be a profitable opportunity and good timing for me to learn Visual Studio all by myself instead of hiring staff to develop my project. And to be honest, I don't regret my move as of this date!

Wednesday, January 9, 2013

During the Christmas holidays, I made the move to install Microsoft Visual Studio C# 2010 Express. As the programming is very, very similar to Unity C#, after 8 intensive days of practice I was already productive.

Since my Visual Studio 30-day trial period will expire soon, I installed SharpDevelop, an open-source application that is a 100% free alternative to Microsoft Visual Studio. What is very interesting is that it can open Microsoft Visual Studio projects, so I can still download the many source files from Visual Studio tutorials, and I can continue my projects seamlessly. So far, I have encountered no show-stopping issues, except that I would have liked to be able to split the Code and Design viewports in order to see both at the same time. Visual Studio allows this (Vertical Tab Group). But hey, it is free; if you really need a free C# IDE, I still highly recommend SharpDevelop.

Friday, December 7, 2012

A lot of respect for, and inspiration from, somebody I had heard about so many times but never took the time to get to know: James Cameron. Passionate, opportunistic, curious, a leader, creative, serious. I recognize myself in every frame of this video, especially when he talks about team respect. I haven't yet had the chance to live such an intense team synergy for very long; I hope to live it for an extended period of time before my end. Not being understood by our environment at all levels is annoying and literally demotivating/counter-productive.

If you expect a fantastic high-resolution 3D voyage under the sea, this is not really that kind of video. This is more about reflection; the voyage is automatically created and permanently printed in your imagination as Robert speaks. The energy and happiness of this man is really admirable.

Thursday, December 6, 2012

This is without any doubt the most amazing technology and invention to have hit the Earth in many, many, many, many years.
If people are interested enough, I am in direct contact with the Director of H2O International; I could accept donations and maybe, with the help of H2O International, we could send a few bottles to a poor village in Burkina Faso. What do you think? Interested in helping me? If yes, contact me personally at exotyktechnologies@gmail.com and once I get a decent number of donations, we can look forward to buying and shipping a few bottles to change the lives of many poor people in Africa. But maybe, since LifeSaver are already accepting donations on their official website, you should just go donate over there directly. http://www.lifesaversystems.com/

If mathematics, and more precisely the capital markets, interest you, I invite you to watch this content-rich video. You will see why all the major brokers are located in New York, and more precisely, why the most successful ones are so close to each other.

Saturday, November 17, 2012

The Nintendo Wii-U will be available tomorrow on the North American market, but I can already foresee a possible application for the screen-enabled gamepad: mounting it on a separate helmet accessory, so the screen floats in front of your eyes. Also mount a standard Wiimote controller on it, and you get head-tracking. Use another standard Wiimote controller in your hand, and if the game is coded properly, you can have separate motion tracking of your head and of your arm. The experience could be very similar to what the Oculus will try to achieve on the PC platform, except without 3D stereoscopic graphics (since the Wii-U GamePad's screen does not have any stereoscopic abilities) and with lower specs.

The Fortaleza glasses will be released for the next XBox console (planned for 2015, most likely to be delayed until 2016). Those glasses promise an immersive gameplay experience, and I can't help but believe that Microsoft will partially hit their goal. I say partially because, just like what happened with Kinect v.1, there won't be as many developers jumping aboard as gamers would wish. This story will inevitably repeat a second time, because it is a proven fact that a peripheral that is not introduced with the system at launch provokes userbase fragmentation. And developers want the biggest userbase possible; it allows them to forecast a higher sales volume.

But hey, this is a good strategy nonetheless, because Microsoft first needs to attract gamers to their console, and bundling it in a mandatory manner with a high-priced peripheral right at launch would play in nobody's favor. They will introduce the Fortaleza glasses somewhere near the console's mid-life (or they will stretch as much as possible to release them for the console after this one), so gamers will be more likely to put more cash on the table (very promising and exciting demos but very few full-length games will be showcased), and this will allow Microsoft to re-introduce the technology (a revised model, just like Kinect v.2) by default at their 9th-generation console launch; at that moment, all developers will know that all gamers will have it, and this is when we will start to see a LOT of very interesting titles.

Friday, November 16, 2012

John Carmack, one of the creators of Doom, Quake and Wolfenstein, also reputed to be the "father" of 3D gaming, is supporting the successful Kickstarter project "Oculus" by announcing that Doom3 BFG Edition will be compatible with it.

The Oculus is intended to be a low-cost HMD whose goal is to elevate the PC gaming experience by providing superior immersion, something that has so far been the main pillar of Virtual Reality. And to be honest, so far the developer edition seems very exciting and promising, with 1280x800px stereoscopic 3D graphics and a motion sensor for head tracking, for an anticipated price point of only about $300-400 USD, which would make the Oculus one of the cheapest as well as one of the best PC gaming HMDs to ever hit the market. Furthermore, the most interesting and unique aspect of the Oculus is without any doubt its superior FOV (Field Of View) of 120 degrees, instead of 40-45 degrees like all its competitors, even the Sony HMZ-T1, which is also expected to be released in 2013, at twice the price ($799 US). A wider FOV means better immersion and a less present "seeing light at the end of a black tunnel" effect.

However, game developers must write specific extra code in order to fully support the peripheral. As an example, because of the way they created the wider FOV, there reportedly needs to be an FOV correction algorithm at the software level. While developers will obviously need to make the jump to high-quality VR sooner or later, this additional compatibility limitation is definitely not to their advantage right now. Doom3 BFG Edition and Hawken, even if they are very nice games, will most probably not be enough to convince early adopters to invest $300-400 in this new peripheral -if that price is even maintained; otherwise it will be even harder. Due to complaints from hardcore gamers who have not even tried the peripheral yet, the Oculus team is already talking about improving the resolution, which will most probably boost the price considerably. Unless more developers jump aboard, which I really hope happens, the Oculus may not encounter proper success at launch (2013, if not delayed, which is likely to happen...).
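
For reference, the kind of software-level FOV correction mentioned above is typically a radial pre-distortion applied to the rendered image so that the lens's own distortion cancels out. A hedged sketch of the usual polynomial model (the coefficients below are illustrative, not the actual Oculus values):

```python
def barrel_predistort(x, y, k1=0.22, k2=0.24):
    """Radially pre-distort a point in normalized screen coordinates
    (center = (0, 0)) so the lens distortion cancels out on the eye side.
    Uses the common polynomial model r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

print(barrel_predistort(0.0, 0.0))   # the center is unchanged: (0.0, 0.0)
print(barrel_predistort(0.5, 0.0))   # points away from center are pushed outward
```

In practice this warp runs per-pixel in a fragment shader over the rendered frame for each eye, which is exactly the "extra coding" a game must ship to look correct through the lenses.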

I believe there are a few things to consider, as well as a few ways to make it more interesting and cheaper.

INTERESTING
While 3D stereoscopic graphics are a real plus when talking about immersion, I firmly believe that the most interesting feature is still the gameplay possibilities that head-tracking technically allows.
Separate head-tracking and arm-tracking working together can bring a very rich gameplay experience to First-Person games, like all those First-Person Shooters (Call Of Duty, Battlefield, Medal Of Honor, Ghost Recon Advanced Warfighter, Doom, etc.). In 99% of all FPS games released up to now, the aiming crosshair is always in the exact center of the screen. Imagine being able to detach it from the center of the screen. I mean, imagine looking forward in one direction, and still being able to aim and shoot at something or at an enemy you don't even see on-screen; that is the level of immersion real VR is all about. I was personally subsidized by the Quebec government to lead an R&D team on this in 2004-2005. We used the CryEngine1 (FarCry) to conduct those tests, and we succeeded (well, I succeeded :P). At that moment I was mainly a 2D/3D artist (sometimes freelancing) and had very low programming capabilities. But since then I have learned 9 different programming languages intensively (some targeted at game development, like C# under Unity3D). I would be perfectly able, today, to code this feature for a game, so I imagine we can expect that a reputed coder like Carmack will understand and develop this for the Oculus, too. He turned down an offer to go program for NASA, after all.
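
A decoupled-crosshair scheme like the one described above could be sketched as follows (hypothetical Python with a simple linear angle-to-screen mapping; a real renderer would project through the camera, e.g. using tangents, and handle pitch as well as yaw):

```python
def aim_offset_on_screen(head_yaw_deg, arm_yaw_deg, fov_deg=120.0):
    """Project the tracked arm's aim direction into the tracked head's view:
    the crosshair offset (in normalized screen units, -1..1) is the yaw
    difference relative to half the field of view. Beyond +/-1 the aim
    point is off-screen, yet the player can still shoot at it."""
    delta = arm_yaw_deg - head_yaw_deg
    return delta / (fov_deg / 2.0)

print(aim_offset_on_screen(0.0, 0.0))    # aiming dead center: 0.0
print(aim_offset_on_screen(0.0, 30.0))   # crosshair halfway to the edge: 0.5
print(aim_offset_on_screen(0.0, 90.0))   # 1.5: shooting at something off-screen
```

The wider the HMD's FOV, the more of that decoupled aiming range stays visible on-screen, which is one more reason the Oculus's 120 degrees matters for this kind of gameplay.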

CONCLUSION
That said, I am pretty sure it would be possible to create a very interesting PC gaming HMD that costs as little as $170-220 USD, and that pricing would really help democratize VR. Because gamers are tech-savvy, and price point is a determining factor. I think that hard-core gamers who are willing to afford a $600-1200 HMD with high-end specs are not conscious enough that the democratization of a technology is a key factor in its success; without an interesting userbase, developers don't have the same level of interest in supporting new hardware/technology. Development is expensive, and the risk of a bad or even negative ROI (Return On Investment) is too big. When an HMD manufacturer understands this and finds the sweet spot between innovation, quality, compatibility and price, VR gaming on PC will finally take off in good health. It will also need at least one killer app, and I highly suspect that it won't be related to gaming. Movies will help, but social "telepresence" with some deeply-integrated geolocalization features seems more probable.

Sunday, November 11, 2012

There are a few important facts to highlight for you, the reader, before going any further in speculation:

1-AMD first tried to acquire nVidia, but ended up acquiring ATI instead.

2-Since AMD launched their APUs (CPUs that embed a GPU), it is evident that people at AMD are trying to convince consumers to adopt Radeon graphics, because when an AMD APU is paired with an nVidia video card, the embedded Radeon GPU becomes ineffective. When paired with a Radeon video card and a motherboard that supports Crossfire, the embedded GPU's power is unlocked and added to the discrete Radeon video card's power, creating instant added value for the customer. This means the GPU embedded in the APU can be used as extra horsepower for dealing with other things like physics and/or A.I. But developers need to take advantage of this, and nVidia is likely to release something new before it happens in order to prevent this on the PC platform.

3-While being very inexpensive, AMD APUs lack any GDDR (RAM dedicated to GPU processing), using the standard, but slower, RAM connected to the motherboard. The customer can unlock faster GPU performance by pairing the APU with faster RAM modules. For example, a motherboard that mounts an APU and 2400MHz DDR3 RAM will deliver much more graphics horsepower than a motherboard that mounts an APU and 1600MHz DDR3 RAM.
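
The impact of faster RAM on an APU is easy to see from peak-bandwidth arithmetic (a back-of-the-envelope sketch assuming a dual-channel, 64-bit-per-channel DDR3 setup, and reading "DDR3-2400" as 2400 effective MT/s):

```python
def ddr3_bandwidth_gb_s(transfer_rate_mt_s, channels=2, bus_width_bits=64):
    """Peak DDR3 bandwidth: transfers/s x bus width in bytes x channels.
    'DDR3-2400' already counts both edges of the clock (2400 MT/s)."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

print(ddr3_bandwidth_gb_s(1600))  # dual-channel DDR3-1600: 25.6 GB/s
print(ddr3_bandwidth_gb_s(2400))  # dual-channel DDR3-2400: 38.4 GB/s
```

Since the embedded GPU has no GDDR of its own, that system-RAM figure is effectively its video memory bandwidth, which is why the 1600-to-2400 upgrade translates so directly into graphics performance.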

4-nVidia solutions are globally more expensive than equivalent AMD solutions.

5-nVidia Tegra chips for ultra-mobile devices are in fact slightly-modified quad-core ARM CPUs that embed an nVidia GPU, a northbridge, a southbridge, a memory controller, and a 5th companion core. They compete with Mali, a graphics chipset family also created by ARM and used in many smartphones, such as many Samsung devices.

Now it is time for open speculation.

The next XBox console is rumored to rely on a 16-core IBM ARM CPU coupled with an AMD GPU and custom, lightning-fast RAM. This is at least what is rumored to be included in one of the early development kits, but final specs often turn out to be only half those of the devkits, which brings the thing down to (approximately) an 8-core CPU, equivalent to today's high-end technology on the PC platform.

Chances are that the console's AMD GPU will not embed any standard GDDR; it would rather rely on very fast system RAM in order to compensate for the lack of direct access to GDDR. But why? GDDR5 is so fast already, why go beyond it?

Well, think about it for a minute: if Microsoft can get the RAM to work at least as fast as the GDDR5 technology that has been around for a few years already, this would open up a HUGE improvement in graphics, since developers would have access to the whole system RAM (which you can expect to be considerable, reaching 8GB-12GB) as a standard, instead of a non-standardized maximum of 1GB or 2GB like on PC.

Today's PC games, even top-end ones like Crysis3, are designed to use 1GB of GDDR5, or 2GB in the very best scenario. But we don't see major improvements in graphics quality between 1GB and 2GB, because developers are not wasting their time developing ultra-high resolution graphics for something that only 5-12% of their userbase will be able to run, even if they have video chipsets powerful enough to render them. They are boosting a few things, like the screen resolution, the special effects resolution (blur, lightmaps, etc.) and of course the framerate, but they are not bothering to develop new features or graphics because THE TIME AND RESOURCE INVESTMENT IS NOT WORTH THE PAIN, AND NO BIG COMPETITOR WILL DO IT ANYWAY. However, it is a whole different story if you approach them with a console that promises to sell millions of units on day one, with very high and standardized specs. It is now not only worth the pain, but it gives them no choice but to improve if they want to keep the flag in their camp.

AMD will have a stronger presence and reputation on the PC platform, because up to now they have provided very interesting price-vs-performance solutions, but they have failed to release any convincing ultra-mobile hardware. Unless they hurry up and release a solid and attractive offering for the ultra-mobile scene, they won't be able to catch up to nVidia's notoriety in this field, and will instead keep focusing on growing their presence in the desktop and console scenes. Tablets and smartphones are stealing market share from laptops every month, and this will continue for years to come. AMD's plan is most probably to become a stronger actor in the desktop and console worlds than they have ever had the opportunity to be. This is where they will channel their resources, offerings and R&D.

nVidia will continue to sit on their reputation in the mass-market desktop scene, just like Intel is doing right now, because nVidia currently has the perfect opportunity to rule the ultra-mobile scene with their Tegra chips. They will continue to compete with AMD on the desktop scene, in order to keep stealing as many shares from them as possible, but they will want to do this only in exchange for high profit margins. In other words, they will target a more niche market (a smaller territory), but they will defend it with force. Much like Apple, nVidia will play the card of the expensive high-end, and will invest a lot in marketing strategies and corporate identity to justify their higher price points (which may really be worth it anyway). You can expect nVidia to announce many new business partnerships in the forthcoming years.

I can also imagine them acquiring ARM. Not for direct profits, but rather with the objective of seducing all ARM partners into using their graphics chip (bigger production = lower costs), thus reinforcing their presence on the ultra-mobile scene and gaining a stronger edge over Intel and AMD in this quickly-growing field. If nVidia kills the Mali GPUs in favor of nVidia GPUs, it is almost guaranteed that Intel won't make the move to compete with ARM. Such a move would be very hard for Intel, whose only chance to catch up with the ARM + nVidia architecture would be to invest many billions of dollars; Intel won't be interested in investing that much if they cannot tackle the ARM + nVidia combo. They currently need to tackle only ARM and have so far proved unable to (even with their ATOM CPU), so tackling ARM + nVidia is very unlikely to happen unless Intel finds a new way to innovate. This would be a way for nVidia to secure their young but already expensive venture into the ARM architecture, because Intel is reportedly preparing another attempt for 2014, and if they succeed, it will hurt not only ARM itself, but also nVidia.
But that remains to be seen...

Saturday, September 1, 2012

Two friends and I have revived an old habit: meeting for a Developers Night! We originally called such events "3D Nights", but as I now spend more time having fun coding my Unity game in C#, we are giving the name a more general scope :P

Phil, a friend of mine who works at FunCom Montreal, has been animating characters in Maya.
Remz, for his part, was updating his techniques in 3DStudioMax. I taught him Smoothing Groups and UVW Unwrapping, two things that can quickly become complicated if you have no guide (and once fear sets in, you don't always find the courage to find and follow tutorials on your own initiative, hence why having a teacher who pushes you to learn more is a good thing in this world of complex software wizardry and foreign menus!).

Personally, I was indeed programming my 2D game in Unity3D. I fixed a few bugs and started new features. Along with Remz, I also found many new key features that will enhance the overall gameplay experience by a considerable factor. I think I am reaching my goal with success.