An Eclipse plugin would be useful, but its absence doesn't render this editor useless, especially given that I've found no other tool that does this job. Okay, maybe I've overlooked some unpolished editors that require Python and ship no binary build, and I haven't tested JBE as I fear it only supports Java 1.5.

If it matters that much to you, why not contribute? You could file a request for enhancement and implement the feature yourself, couldn't you?

I'd love to contribute... but I don't have the time to work on my own projects, let alone other people's.

Though for instrumenting obfuscated code you'd want the code injection points to key off certain other prominent bytecode features, which would be highly dependent upon context; rather hard to fully automate such a feature.

What I'm imagining would be similar to the code creation engines behind GUI builders, but generating bytecode transformations instead of UIs.

I guess no tools like these exist? The closest I've seen is ASM's ASMifier (it generates ASM code that'll programmatically recreate a visited class), though that's a long way from what I'm wanting.

Is the player going to be required to accurately read when the bar is nearly empty/full/at a precise level? If so, the UI should give special emphasis to these states, or always present an accurate (possibly numerical) value.

Examples where precise reading is necessary:

- Health bar, if health packs are sparse & over-healing is impossible.
- Shield level, if there is a significant mechanical difference between 0 and >0 (e.g. no damage bleed-through, a 'shield break' state change).
- Ammo clip, when reloading destroys unspent ammo.
- Golf swing'o'meter.
- Combo meter, when level determines if a special action is possible/impossible.
- Stamina meter, when level determines if a special action is possible/impossible.
- etc, etc.

What your last video seems to place special emphasis upon is the 1/6th boundaries. Partitioning a bar like this is usually done to indicate a significant mechanical change between boundaries.

Maybe I'm confused, but I recall that Java automatically caches a VolatileImage copy of a BufferedImage somewhere if you draw that image a lot, meaning that it is in fact best to use BufferedImages for your regular textures.

Essentially yes; VolatileImages are generally only of relevance as the target of a draw, not a source. They're typically of use if you're doing some fancy compositing or buffering.

- Java is an OO language, thus the term for a unit of code is 'method', not 'function'.
- Not wishing to reload the images is not a good reason to make them static. As your program develops, the scope of your resources is likely to change; encapsulating the resources within instanced objects makes refactoring your code to accommodate such changes easier. The typical example of this would be where your design transitions from:
a) having a single level, where all resources are held in memory at once,
b) to multiple levels, where intermediary loading screens are presented to the user,
c) to, perhaps, an infinite world where resources are streamed into memory in the background as the user navigates the world.

- Images returned from ImageIO have been eligible for hardware acceleration for many years; there's no need to copy them to a new image.

It's also worth keeping in mind that every Image in the heap also occupies VRAM for its cached copy used by the h/w accelerated pipeline. Depending upon the h/w rendering pipeline being used by Java2D (which will vary by platform and cmdline flags), you might be wasting large amounts of VRAM because Java2D is creating power-of-2 textures that poorly fit the dimensions of your Image. While this inefficiency won't increase your heap usage, it might significantly increase VRAM usage - perhaps to the point where it negatively impacts performance. The solution here is to adopt sprite sheets, along with the associated tooling to generate them, and code to draw them.
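A minimal sketch of the sprite-sheet approach: one large image is loaded, and each draw copies a tile-sized sub-rectangle via the 10-argument drawImage. The tile size, sheet layout, and class name here are illustrative assumptions, not anyone's actual code.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class SpriteSheetDemo {

    /** Draws the 16x16 tile at (col, row) of the sheet onto g at (dx, dy). */
    static void drawTile(Graphics2D g, BufferedImage sheet,
                         int col, int row, int dx, int dy) {
        int ts = 16; // tile size; an assumption for this sketch
        int sx = col * ts, sy = row * ts;
        // 10-arg drawImage copies the source sub-rectangle into the
        // destination rectangle, so no per-tile Image objects are needed.
        g.drawImage(sheet, dx, dy, dx + ts, dy + ts,
                    sx, sy, sx + ts, sy + ts, null);
    }

    /** Builds a tiny sheet, copies one tile, and returns the copied pixel. */
    static int demo() {
        BufferedImage sheet = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        // Fill tile (1, 0) with solid red so the copy is observable.
        for (int y = 0; y < 16; y++)
            for (int x = 16; x < 32; x++)
                sheet.setRGB(x, y, 0xFFFF0000);

        BufferedImage dst = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        drawTile(g, sheet, 1, 0, 0, 0);
        g.dispose();
        return dst.getRGB(0, 0);
    }
}
```

One big sheet means one texture in VRAM with far less power-of-2 padding waste than dozens of oddly-sized Images.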

For drawing a single pixel onto an accelerated surface, drawLine is faster than fillRect.

However, if per-pixel stuff is *all* the drawing you're doing, then do it on an unaccelerated surface, directly to the image's backing int[] (exposed via DataBufferInt.getData()).

Though without hardware acceleration it's still going to be comparatively very slow; if you want to do stuff at the per pixel level, don't use Java2D.
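For reference, this is roughly what writing to the backing array looks like. Note that grabbing the DataBuffer "untracks" the image, disqualifying it from hardware acceleration, which is exactly why it only makes sense when all your drawing is per-pixel. The class and method names are just for illustration.

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class PixelPlot {

    /**
     * Plots one pixel straight into the image's backing int[], then reads
     * it back via getRGB. Bypasses Graphics2D entirely.
     */
    static int plotAndRead(int x, int y, int rgb) {
        BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        // For TYPE_INT_RGB/TYPE_INT_ARGB the raster is backed by a DataBufferInt.
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        pixels[y * img.getWidth() + x] = rgb; // row-major: index = y * width + x
        return img.getRGB(x, y); // TYPE_INT_RGB reports alpha as opaque (0xFF)
    }
}
```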

You might be able to achieve a similar lighting effect while keeping your surfaces accelerated, but you're constrained to the composition rules defined in AlphaComposite (the useful ones being dst_in for overlaying shadow, and src_over for applying colour). There are tutorials around on how to do this. Though without a means of doing additive blending, you'll eventually hit a wall in what you can achieve.
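A sketch of the dst_in trick: DST_IN multiplies the destination (the scene) by the source's alpha, so transparent light-map pixels black out the scene and opaque ones leave it untouched. The helper name and image sizes are assumptions for illustration.

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class LightMapDemo {

    /**
     * Applies a light map to the scene in place using the DST_IN rule:
     * result alpha/colour = scene * lightMap alpha. Where the light map
     * is transparent the scene becomes transparent (i.e. shadow).
     */
    static BufferedImage applyLightMap(BufferedImage scene, BufferedImage lightMap) {
        Graphics2D g = scene.createGraphics();
        g.setComposite(AlphaComposite.DstIn);
        g.drawImage(lightMap, 0, 0, null);
        g.dispose();
        return scene;
    }
}
```

In practice you'd composite the darkened scene over a black backdrop with src_over to get the final frame.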

Frustratingly, JavaFX offers several useful (and hardware accelerated) blend modes not available in Java2D (including additive blending), but hides them behind a fundamentally flawed API design, rendering them useless for the common use case.

Another limitation that's fairly prominent: if I want to procedurally generate an image (a GraphicsContext onto a Canvas seems the only way?), then reuse said generated image in multiple places in the scene graph, I must snapshot it into a WritableImage?

It makes sense that there's no way of having multiple Canvases in the scene graph share an underlying surface, as that'd break the scene compositing process.

However, snapshotting requires the creation of multiple extraneous objects, and significant memory copying; it's slow. Does the JavaFX API really have no performant way of accomplishing this very basic functionality?!

Java2D might have been a terrible API, but at least it was reasonably performant & logical once you got around its over-engineered abstraction, and knew what was actually happening under the hood.

It's maddening that all that seems to be required is:

WritableImage.getGraphicsContext()

You'd then be able to leverage JavaFX's more feature-rich graphics pipeline, without being burdened by the limitations of the scene graph. I've not been so thoroughly confounded by such fundamentally flawed API design since I did MIDP development a decade ago.

I've been enticed by the additional blend modes available in JavaFX to have a little dabble with the API. However, doing low-level stuff seems incredibly clumsy?

Am I correct in thinking that drawing to an image in vram (equivalent of Java2D VolatileImage), and then drawing said image onto another image in vram, is now only possible through composition of canvas elements in the scene graph?

If this is the case, how would the current API facilitate a cyclic drawing dependency? E.g. drawing a VRAM image onto itself? (Or, more efficiently, onto an intermediary surface, then onto itself.)

A scene graph is all well and good, but shouldn't the API also expose enough of the underlying rendering pipeline so that you can implement a comparable scene graph in user code if JavaFX's offering is inadequate?

I came to JavaFX expecting a Swing 2.0, but without Canvas exposing its underlying vram surface, the API seems needlessly clunky and restrictive for anything more complex than basic image composition.

Also, createScreenCapture works in screen coordinates, not window coordinates, so your code is assuming the game window is positioned at the origin. For your game that might be true, but in general it's a flawed assumption.

If capturing the desktop is your intent, then your code should not assume the top left of the screen begins at 0,0. It's common for multi-monitor setups to have the primary monitor in the centre (origin: 0,0), secondary displays to the left (origin: -width,0), and to the right (origin: +width,0).
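The fix is just a coordinate translation before handing the rectangle to Robot.createScreenCapture. A sketch (the window position would come from something like Window.getLocationOnScreen() in real code; the values and class name here are illustrative):

```java
import java.awt.Rectangle;

public class CaptureRect {

    /**
     * Translates a window-relative region into the screen coordinates
     * that Robot.createScreenCapture expects. Handles negative screen
     * origins from secondary monitors positioned left of the primary.
     */
    static Rectangle toScreen(Rectangle windowRegion, int windowX, int windowY) {
        return new Rectangle(windowRegion.x + windowX,
                             windowRegion.y + windowY,
                             windowRegion.width,
                             windowRegion.height);
    }

    public static void main(String[] args) {
        // A window on a monitor to the left of the primary has a
        // negative x origin, e.g. -1920 for a 1920-wide secondary display.
        Rectangle region = new Rectangle(0, 0, 800, 600);
        Rectangle screen = toScreen(region, -1920, 0);
        // In real code: new Robot().createScreenCapture(screen);
        System.out.println(screen);
    }
}
```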

Or at the very least, make sure you have plausible deniability if you do so.

Umm, I don't think this is a great takeaway. The whole point is that programmers have an active role in, and as this sentence proves, a responsibility for, the code they write. Plausible deniability isn't enough.

Absolutely not!

A rigorous development process is there to relieve the programmer of all responsibility for the code they write. The client takes responsibility for the acceptance of the design, and QA takes responsibility for the acceptance of faulty code.

Only when the programmer is wearing multiple hats, can he be accountable for his own code.

The index must be of type int and is popped from the operand stack. If index is less than low or index is greater than high, then a target address is calculated by adding default to the address of the opcode of this tableswitch instruction. Otherwise, the offset at position index - low of the jump table is extracted. The target address is calculated by adding that offset to the address of the opcode of this tableswitch instruction. Execution then continues at the target address.
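The spec's target-address computation, transcribed directly into Java for clarity (the method and parameter names are mine; the logic is the spec's):

```java
public class TableSwitch {

    /**
     * Computes the branch target of a tableswitch instruction per the JVM spec.
     *
     * @param pc            address of the tableswitch opcode itself
     * @param index         the int popped from the operand stack
     * @param low           lowest case value covered by the jump table
     * @param high          highest case value covered by the jump table
     * @param defaultOffset signed offset used when index is out of range
     * @param offsets       the jump table, one signed offset per case
     */
    static int target(int pc, int index, int low, int high,
                      int defaultOffset, int[] offsets) {
        if (index < low || index > high) {
            // Out of range: branch to default.
            return pc + defaultOffset;
        }
        // In range: take the offset at position (index - low) in the table.
        return pc + offsets[index - low];
    }
}
```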

Though looking at the bytecode generated by javac 8, it appears that negating booleans is done with a branch now!? (I believe it used to be done with an xor.) Presumably that's something optimized by the JIT.

It does support the UTF-8 character encoding, since all recognized tokens (", :, 0-9, etc.) are also in the ASCII range (highest bit not set). Or in other words: no byte of the 1-4 bytes in a single UTF-8 character encoding can be misinterpreted as a recognized token by the parser, since all such bytes use the highest-order bits to indicate that there are more bytes to follow. Quite clever of them to design the encoding this way. So you can use any UTF-8 encoded character you like in object key names and string value literals.

Makes sense; so the len in ParseInterface#stringLiteral(int off, int len) is the number of bytes that make up the literal, not necessarily the number of characters. You should make sure the ParseInterface javadoc makes that clear.

Really minor tweak: byte c in isescaped(...) should be an int (as it is being used as a counter, not intermediary storage of a value copied from the input).

....and the method itself should be named isEscaped(...)

I'd also decide whether the class is going to be stateful or not. At the moment you're storing pos as state, but passing the byte[] & ParseListener around as parameters.

Go with one or the other; doing both looks messy. Given the aim is minimal weight & dependencies, I'd be inclined to make it stateless so everything can be static. Either way, I doubt it'd have any noticeable impact upon performance.

I'd be inclined to have a reference in the Element, to the List that owns it.

It'd allow fail-fast protection against multiple logic errors (adding an Element to multiple lists/the same list multiple times, removing/moving an element from a List that it's not a member of, etc), making the code far less fragile.
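A minimal sketch of the owner-reference idea (the OwnedList/Element names are assumptions for illustration, not the code under review): each Element tracks its owning list, and add/remove fail fast on the logic errors listed above.

```java
import java.util.ArrayList;
import java.util.List;

public class OwnedList {

    public static class Element {
        OwnedList owner; // null while detached
    }

    private final List<Element> elements = new ArrayList<>();

    /** Fails fast if the element already belongs to any list (including this one). */
    public void add(Element e) {
        if (e.owner != null) {
            throw new IllegalStateException("Element is already in a list");
        }
        e.owner = this;
        elements.add(e);
    }

    /** Fails fast if the element is not a member of this particular list. */
    public void remove(Element e) {
        if (e.owner != this) {
            throw new IllegalStateException("Element is not a member of this list");
        }
        e.owner = null;
        elements.remove(e);
    }

    public int size() {
        return elements.size();
    }
}
```

The cost is one reference per element; the payoff is that double-adds and cross-list removals blow up at the call site instead of corrupting state silently.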

@Abuse, I think all tests should allow laptops and spreadsheets, mostly for the ability to easily save and see intermediate results (which is hard on a calculator), and to be able to see the history of working in prior cells to spot an error.

Are we imagining different examination techniques? I'm still imagining paper examinations where the calculator simply supplements the written word; it's the responsibility of the student to record their working on the examination paper.

What you're suggesting sounds like a transition to completely digital examination, and while that paradigm shift is in the works, it's yet to become mainstream here in the UK. I do agree though; digitally recording all intermediate working would be a much better way of evaluating a student's level of understanding, but spreadsheets are not the ideal tool for that purpose.

Quote

and regular * not x and / not ÷ symbols which are encountered all over the workplace and in everyday life.

Mathematical operator symbology is indeed a mess, and the pervasiveness of competing computer-science symbology has only muddied the waters further. That's a whole different topic though.

Just to be clear, when I say "teaching programming" in an elementary school, I expect to adapt it accordingly; that means no Gradle or complex projects, but simple exercises, like moving a character through a 2D tiled map. Something that also attracts their attention and is, somehow, fun.

Teaching programming should indeed be teaching how to elaborate and process algorithms.

Precisely.

I introduced my 5-year-old nephew (mad on Minecraft) to code.org's Minecraft Hour of Code, and he found it incredibly rewarding. Programming (in all its guises) is already prevalent throughout our society, and will be even more so in the future; introducing it early as a 'play thing' is vital for future learning.

On a related note, I always think it's incredibly dumb how students are forced to use such limited calculators in exams. All exams in math related subjects should allow spreadsheets or mathematica or Google or a programming language on a laptop.

At what level are you talking about?

No high-school student would ever need access to such powerful tools to solve the problems they're being examined on. Moreover, having access to such tools would change the focus of the examination; you'd no longer be testing them on their understanding of the topic, but on their ability to use the tools.

Programming is logical problem solving; it's an essential life skill for everyone. It can (and should) be taught to all young children.

Coding on the other hand (which I suspect is what the OP is actually talking about?) is a far more specialised discipline that not everyone can grasp to the same level. So while you wouldn't anticipate that every pupil taught coding at school will go on to use it in their professional life, they will know of its existence & thus have an understanding of how computers & technology work.

The parallel would be the car; everyone uses them, most know how they work, but very few would dare attempt anything but the most superficial of servicing.

Thankfully both programming & coding have been a part of the national curriculum here in the UK for some time, so I guess someone in authority knows where society & technology are headed.

IME having external dependencies in a build is a bit like building your castle on sand. The build must always succeed. Discuss.

So you're advocating the inclusion of external packages with your build, rather than having them pulled from a central repository?

I'm of the opposite opinion; pulling from a central repository minimizes data duplication, and therefore there are fewer places for mysterious build errors to creep in. Either you get the correct version of the correct library, or you get nothing. Fail fast.
