For my first project, I'm going with: "We are the precursors and we still have 150,000 years before we attain demi-god status."

The interstellar twist really serves "other motives" than the typical sci-fi ones. Motives I will leave unexplained until I have a prototype for you guys to check out in one to two years.

Speaking of which, everything must be done differently if one is about to build an enduring civilization on an interstellar scale.

Essentially, you can't have a society that, based on whims and quirks of fashion, radically changes every law and even its notions of good and evil every 30 to 100 years; a civilization that, by the time it finishes a 30-year construction project, has already decided it wants to tear it down.

Just look at the back-and-forth over the Mars mission; it's ridiculous and farcical.

_________________All scientists across the world work for US Democratic Party

Other substances can be used. Styrofoam, for example, has a thermal conductivity of around 0.04 W/(m·K). A substance called silica aerogel comes in at about 0.02 (asbestos, which was very popular for a long time because it conducts heat poorly, is a fibrous silicate mineral, so the aerogel may raise similar industrial-use concerns).

You could also use various gases. For example, modern double-pane windows seal an inert gas between the two layers of glass. The inert gas has low conductivity, so heat that strikes the outer pane does not readily conduct into your house, and heat within your house does not readily conduct outside.

For that purpose, a ship with a steel inner shell, a stealth-painted ceramic exterior, and a layer of dichlorodifluoromethane (conductivity about 0.007 W/(m·K)) between the two might give you the stealth you seek.
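To put those conductivity figures in context, here is a quick sketch of steady-state conduction using Fourier's law (q = k·ΔT/d), ignoring convection within the gas layer. The 5 cm gap and the interior/exterior temperatures below are invented example values, not figures from the discussion:

```python
# Steady-state heat flux through a slab: q = k * dT / d (Fourier's law).
# The conductivities (W/m·K) are the ballpark figures quoted above;
# the 5 cm layer and 297 K temperature difference are made-up examples.

def heat_flux(k, thickness_m, delta_t_k):
    """Heat flux in W/m^2 through a slab of conductivity k."""
    return k * delta_t_k / thickness_m

layers = {
    "styrofoam": 0.04,
    "silica aerogel": 0.02,
    "dichlorodifluoromethane": 0.007,
}

# Roughly 300 K interior against a ~3 K exterior shell, across 5 cm.
for name, k in layers.items():
    print(f"{name}: {heat_flux(k, 0.05, 297):.0f} W/m^2")
```

Going from styrofoam to the refrigerant gas cuts the leakage rate by roughly a factor of six, which is the whole argument for the sandwich hull.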

You'd still need a way to dump the internally-generated heat though. Maybe a ship in stealth mode could store it up in a heat sink and then deploy radiator fins to dump the heat when the need for stealth has passed.
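To get a feel for that store-then-dump cycle, here is a back-of-envelope sketch using the Stefan-Boltzmann law for the deployed radiators. The sink mass, fin area, emissivity, and temperatures are all invented for illustration:

```python
# "Store heat in a sink, dump it later via fins" back-of-envelope.
# Stefan-Boltzmann radiated power: P = eps * sigma * A * T^4
# (radiating to effectively 0 K space). All numbers are illustrative.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2·K^4)

def stored_heat_joules(mass_kg, specific_heat, delta_t_k):
    """Energy absorbed by a heat sink warming by delta_t_k kelvin."""
    return mass_kg * specific_heat * delta_t_k

def radiated_power_watts(emissivity, area_m2, temp_k):
    """Power radiated by deployed fins held at temp_k."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A 10-tonne water sink warming by 60 K stores ~2.5 GJ;
# 100 m^2 of fins at 400 K shed it over several hours.
q = stored_heat_joules(10_000, 4186, 60)
p = radiated_power_watts(0.9, 100, 400)
print(f"stored {q / 1e9:.1f} GJ, dumped in {q / p / 3600:.1f} h")
```

The takeaway is that stealth endurance on a stored-heat budget would plausibly be measured in hours, not days, before the fins have to come out.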

If you imagine military entities in the future, say between 150 to 300 years hence, what do you imagine their small arms would comprise?

The large weapons seem fairly well established: rail guns, coil guns, missiles, lasers. But what about your typical infantryman?

On the one hand, the physical composition of the universe (well, the tiny "baryonic matter" portion of it, which is all that matters for our purposes) seems fairly well established and perhaps even "complete." I'm aware of predictions for a few extra rows in the periodic table, but those are mostly going to be short-lived synthetic elements created at great energetic expense for research purposes.

Are the physical principles and processes on which small arms are based today so close to "optimum" that the general themes will continue for the next millennia?

_________________Nero: So what is your challenge?

Anthro: Answer question #2: How do "Climate Change models" mathematically control for the natural forces which caused the Ice Age(s) to come and go . . . repeatedly?

Major game changers include new nanomaterials and robotics.

I've written an article on it before, but in short: enveloping a full-sized adult in armor is far more difficult, and carries far more drawbacks, than doing the same for a CPU.

And since the suit itself requires its own power supply, power is not a differentiator anyway: if a man's suit needs one, a robot's will too.

This doesn't stop us from creating power armor for infantry, but I don't see it becoming standard issue. Instead, I can see grunts being replaced by robots one third the size of a grunt, packing armor that makes them tough as nails: at that scale there is literally only a baseball-sized volume of internals you need to protect, so you can protect it really well. CPUs are also far more heat-resistant than humans, so robots hold every advantage over humans, even against heat (energy) weapons.

The larger war machines, tanks, mechs and such, are natural places to fit a human operator. I can imagine the lowest infantry rank being equivalent to captain: each leads his own robotic platoon of combatants from a tank or mech that moves with the platoon to ensure uninterrupted comms with the robots.

Since we're going to see much more heavily armored combat units over the next 100 years, another great "ramp up the armor" period, we're also going to see the 5.56 mm and its kin become largely obsolete.

Imagine Afghanistan: if you move in with grunts that on average have 20 cm of high-tech armor surrounding the core, while packing heavy anti-armor weapons such as high-velocity cannons, what are you going to do with the old RPGs and AKs? Nothing.

The robotic grunts are also going to be very agile and maneuverable, meaning they can operate with much the same agility as an unencumbered athlete while carrying armor and a full suite of weapons and sensors.

Alongside the increases in armor and agility of grunt-level troops, the human-crewed heavier tier of combat vehicles will have its own power generation. That lets these units be large enough to house a human commander and to operate energy weapons (rail guns, lasers); and when they're not firing lasers and rail guns, the surplus output can be pumped into their electric motors. A four-legged mech, say, in a spider-like configuration, could then make quick dashes to either side to evade incoming fire. Nothing out of comic books, but something like moving its entire mass one body-width to either side in 3 seconds, then repeating to either side.
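A quick kinematics sanity check on that dash claim, assuming constant acceleration from rest (d = a·t²/2, so a = 2d/t²). The mech's width and mass below are invented example numbers:

```python
# Sideways dash of one body-width in a fixed time, from rest:
# d = a * t^2 / 2  =>  a = 2 * d / t^2, and F = m * a.
# The 4 m width and 40-tonne mass are made-up illustration values.

def required_accel(distance_m, time_s):
    """Constant acceleration needed to cover distance_m in time_s from rest."""
    return 2 * distance_m / time_s ** 2

def required_force(mass_kg, distance_m, time_s):
    """Lateral force needed to push mass_kg through that dash."""
    return mass_kg * required_accel(distance_m, time_s)

a = required_accel(4.0, 3.0)           # 4 m wide mech, 3 s dash
f = required_force(40_000, 4.0, 3.0)   # 40-tonne mech
print(f"{a:.2f} m/s^2, {f / 1000:.0f} kN")
```

At under 0.1 g, the claim is mechanically modest: the legs of a 40-tonne machine only need to deliver on the order of 36 kN of lateral force, which supports the "nothing out of comic books" framing.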

The smaller the robotic grunt is, the easier it is to fry with energy weapons or smash with a rail gun hit. It's hard to evade rail guns because the projectiles are so fast and their flat trajectories make them very accurate.

Every formation will have a number of units equipped with dazzlers. If facing enemy infantry, the dazzler units can simply blind the enemy en masse. If the enemy adopt digital helmets, the dazzlers will help reduce their sensor effectiveness.

In fact, combating enemy sensors will become increasingly frequent, the battlefield will simply be too lethal for human infantry, and it will be cheaper to build a robot than a decent suit of power armor.

Still, don't expect any close combat with melee weapons. The armor will be impervious to such attacks, and the weapons everyone is packing will deliver far more energy than any swung impact. Rather, winning the sensor battle means you have a better chance of hitting the enemy without being hit back.

We will see an increasing number of systems designed to combat missiles, so non-kinetic missiles will lose their reign over battlefields. They will remain useful, but they won't be king of the battlefield.

These combined, human-led, fully mechanized robotic units will be very fast and can carry a lot of supplies with them, allowing faster and deeper penetration into enemy territory than ever before. Against a regular army you can expect these mech platoons, with their robotic 'armored infantry', to be locating and assaulting enemy command points and logistics within hours of the attack, increasing the intensity of ground warfare considerably.

Ground warfare will continue to play a significant role: it is harder to detect and engage units on the ground because of terrain and obstacles. Meanwhile, anti-air detection and engagement capabilities will become ever more common, with platoon-sized elements having their own integrated AA capacity; SPAAG simply falls out of having tanks and mechs with rail guns that can aim above the horizon.

Strategic bombing will be conducted by ever-higher-flying unmanned drones the size of B-2s; with the cockpit removed they can run colder, present even better anti-radar profiles, and face far fewer restrictions on mission length and the like.

Operating a military will rest in ever fewer hands. This doesn't stop anyone from developing effective infantry-based weapons systems, but human infantry are going to become a light infantry / 'peasant levy' class on the battlefield once again: used primarily for their low cost and availability, not their effectiveness. Human infantry will be most useful for population control and policing, for limited reconnaissance and flank guard, and as a 'tripwire' in the sense that you can position them somewhere and, when they get attacked, you know the enemy is there.

Just totally ballparking . . . I'd say in 100 years, we will have a robotic OS that is about at the 1 year old level, but it will be permanently "brain damaged" and unable to advance past about the 2 year old level.

Hard to say at that point; could be another 100 years before an OS that is approximately a 3 year old is created or there could be some sort of watershed period.

Main problems I see with computers that are expected to behave as well as humans:

1. Binary is probably extremely inefficient for modeling "learning." Hell, at this point they are not even really sure what the irreducible "unit" of information is in vertebrate brains, or even in simpler biological sensory systems.

2. Even if we knew how animal minds really worked, how the fuck do we replicate that with little microscopic rigid latticeworks of copper wire!? It might not even be possible; rather, it may be necessary that we develop something like bio-computers (amoeba-like critters living in a sort of chip).

I'm going to stick with a "Children of a Dead Earth" philosophy on this: no make-believe tech, and no, I don't believe battlefield-competent robots are "on the cusp" of being a real thing. Before it can be trusted, it needs to be far PAST being competent, and at this point my sense is that most of the "cutting edge" in "Artificial Intelligence" is a joke.

ADDIT: what are we up to? 50 years of robotics research? 30 at least?

So they are JUST NOW barely getting cutting-edge prototypes that can just barely WALK. In 10 more years Cassie may be able to walk with confidence, but apart from that she will still probably be as dumb as a box of hammers; indeed, she doesn't even really appear to be a true "robot" so much as a walking drone run by a human operator.

No fucking way anything like that has a place as a primary combatant within the next 50 years; grunts (be they of the jihadi or the cowboy variety) will eat its lunch and shit themselves with laughter.

Are there special scenarios where some sort of militarized offshoot of a Cassie-like robot could be of use in a combat scenario? Absolutely. Send her out to draw fire, check for landmines, reconnoiter, etc., etc.

In 100 years I suppose that might evolve into increasingly sophisticated "drones" (be they aerial or terrestrial), but I just do not see the magical "spark" of intelligence at all in any of the views I have into "AI" or "robotics" or "machine learning."

Part of my work is designing 'AI' systems. Simplistic, yes, and that's the whole point.

It is easy to overengineer and bloat systems, make them bigger and more expensive.

Practical application of AI doesn't require any kind of sentience. It is simply about knowing what capabilities you want, reducing those capabilities to the smallest subproblems possible, and then solving each problem in sequence.

Such a system results in an 'AI' that is able to meet its designed capabilities.

Pathfinding is simple. Mapping the surrounding environment is simple. Turning that environment into meaningful data for pathfinding is the first point where we start to run into issues, but so it is with humans as well. A human can be tricked just like a robot can; neither can see a sufficiently concealed position, and so on.
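To illustrate why the pathfinding itself is the easy half, here is a minimal breadth-first search over an occupancy grid. The grid is a toy example; as noted, the genuinely hard problem for a robot is producing a grid like this from raw sensor data in the first place:

```python
# Shortest path on an occupancy grid (0 = free, 1 = obstacle)
# via breadth-first search with a predecessor map.
from collections import deque

def bfs_path(grid, start, goal):
    """Return the shortest start-to-goal cell path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal walled off

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))  # 7-cell route around the wall
```

A few lines of textbook search solve the navigation step; everything interesting happens upstream, in perception.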

Friend-or-foe is another set of difficulties, as it is for humans. Who is a civilian, who is a friend? These things are challenging for humans as well, but we can identify threats reasonably well.

Currently the biggest issues aren't in moving the robot and engaging targets. They lie with the reliability of IFF, e.g. not slaughtering the staff and patients at a UN hospital, not shooting down passenger planes, and the like. Another big issue is ensuring the robot doesn't fall into enemy hands or get hacked. Remote-controlled robots can be hijacked; internally controlled ones will leak the system to the enemy if the thing's battery runs out.

There is also the ethical question of whether we should do as Asimov suggested and have all countries sign on to the Three Laws of Robotics, because there's potentially a big problem in a couple of billion autonomous killbots suddenly going rogue, whether from being hacked or because the latest OS patch had a bug.

Learning is no longer a serious issue for AI. In fact, the successor to AlphaGo (AlphaGo Zero) was this time not given a library of human games; instead it had to invent its own play from nothing but the rules of the game. The new system didn't carry the baggage of human-invented playstyles, came up with its own conventions and playstyles, and soundly beat AlphaGo in 100% of the matches they played.
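The rules-only self-play idea scales down to a toy you can run yourself. The sketch below is my own construction, not anything from DeepMind: tabular Monte Carlo self-play that teaches itself single-pile Nim (take 1-3 sticks, whoever takes the last stick wins) from nothing but the rules and a win/loss signal:

```python
# Self-play learning for single-pile Nim: no example games, only rules
# and a terminal +1/-1 reward propagated back through each game.
import random

def train(pile_size=10, episodes=20_000, alpha=0.5, eps=0.2):
    random.seed(0)
    q = {}  # (sticks_left, move) -> estimated value for the player to move
    for _ in range(episodes):
        sticks, history = pile_size, []
        while sticks > 0:
            moves = [m for m in (1, 2, 3) if m <= sticks]
            if random.random() < eps:                      # explore
                move = random.choice(moves)
            else:                                          # exploit
                move = max(moves, key=lambda m: q.get((sticks, m), 0.0))
            history.append((sticks, move))
            sticks -= move
        # Whoever moved last took the final stick and won; walk the game
        # backwards, flipping the reward sign between the two players.
        reward = 1.0
        for state_move in reversed(history):
            old = q.get(state_move, 0.0)
            q[state_move] = old + alpha * (reward - old)
            reward = -reward
    return q

q = train()
# Taking all 3 sticks from a pile of 3 wins outright,
# so its learned value converges to +1.
print(round(q.get((3, 3), 0.0), 2))  # → 1.0
```

It discovers, with no human input, that grabbing the last sticks is worth +1 and that handing the opponent a winnable pile is worth -1: the same shape of signal, vastly scaled up, that drove AlphaGo Zero.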

For aircraft, AI can already be far superior to any human pilot: it re-analyzes the situation every few milliseconds and can process every possible scenario as a vast decision tree, always ready to switch between offense and defense for the optimal outcome per mission parameters. Humans simply do not have that kind of spatial processing power and situational awareness.

Even so, robots themselves have limitations: the speed of their motivators, their power supply, their protection level, their sensors, and the available processing power. They won't be supernatural as portrayed by Hollywood, but it won't be very difficult to outperform your average grunt. Even without outperforming one, they have the same advantage early firearms did: while not initially superior to bows, firearms and the troops equipped with them were cheaper, so for the same money a firearm-equipped army could outperform a bow-equipped one, and losses were far easier to replace as well.

The same goes for robots. The United States is often seen as morally weak, since a single human casualty per week already leads to 'home front defeat'. But this can be argued to be a feature only of minor wars; the US has never backed down from a major war. Minor wars aren't that important, and there's a case to be made against continuing a military conflict for several decades.

But no one is going to cry over robots - and broken robots can simply be harvested from the battlefield and put back together.

There is nothing unfeasible about combat robots. The only real question is which will become more commonplace: remote-controlled ones or autonomous ones? I can say for certain that even the RC ones are going to have autonomous capability for when the enemy jams communications, so they can safely RTB.
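That jam-then-RTB fallback is essentially a small mode state machine. A minimal sketch, with invented mode names and timeout values:

```python
# Comms-loss fallback for a remote-controlled unit: stay under operator
# control while the link is up, hold position on a short dropout, and
# autonomously return to base (RTB) once the link has been dead too long.
from enum import Enum, auto

class Mode(Enum):
    REMOTE = auto()   # operator has the stick
    HOLD = auto()     # link lost: wait in place briefly
    RTB = auto()      # link still dead: navigate home autonomously

def next_mode(mode, link_up, seconds_since_contact,
              hold_timeout=30, rtb_timeout=120):
    """One tick of the fallback logic; timeouts are illustrative."""
    if link_up:
        return Mode.REMOTE
    if seconds_since_contact >= rtb_timeout:
        return Mode.RTB
    if seconds_since_contact >= hold_timeout:
        return Mode.HOLD
    return mode  # brief dropout: keep doing what we were doing

print(next_mode(Mode.REMOTE, False, 45))  # → Mode.HOLD
```

The design point is that the unit never needs the link to decide what to do next; the operator's commands are just one input, so jamming degrades it rather than bricking it.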

Would love if you can provide a bibliography or some links or anything I can read that can convince me.

I see lots of hype.

So an algorithm "beat" a professional board game player. First off, how do we know it wasn't fixed? Seriously. There are billions being tossed around based on the fantastic notions of "artificial intelligence," so why wouldn't someone be motivated to fix such a contest? It serves as a ready and seemingly solid piece of evidence of success.

Show me an algorithm that has not been programmed to play any specific game, which can learn to play a game. THAT would be impressive.

Show me an algorithm that is unbeatable in any given game, and which multiple pros have tried to beat and failed.

You say "AI are already better than pilots"; well, what does that mean? Are the cutting-edge air forces and airlines of the world switching to "AI"? Show me the proof that they "are better," or else what you are talking about are results in demos, simulations, tests and the like. Until an "AI" pilot functions in the real world, it is all just speculation, and the fact is that speculation drives investment, so one should naturally be skeptical.

As you point out, the main problem with them is that, even when they show competency at a small range of tasks, they are far less predictable than humans and far less resilient (I consider armor or environmental resistance irrelevant if a "soldier" can be turned off with a garage door opener, or hacked into with a mobile phone and turned against his own side).

As a behavioral scientist, I fundamentally take issue with the insistence on the part of IT guys on continually referring to these things as "Artificially Intelligent," and I believe that this insistence is fundamental to the beguiling effect that the real progress has on thinking and discussions.

Come up with a better, more accurate, less cheesy and misleading name for these damn things, then we can discuss sensibly.

A goldfish is "intelligent": it observes, it learns, it grows physically and psychically, it can solve problems, it can find its way around its environment with great ease, it can tell friend from foe from neutral from food, it can evade with some degree of success and forage/subsist like a pro, and as it proceeds through its developmental sequence it finds itself unerringly driven to find a mate and either assess or compete for one. It can reproduce; it dies.

As far as I can tell, the "best" "AI" can "observe" to a limited extent. They are incapable of "learning" because all they know how to do is what they were instructed to do by their programmers. Moreover, they take zero interest in anything they were not instructed to take interest in; they have no curiosity, no needs, no fears, no desires; they are simply algorithms which run as they were instructed. PERIOD.

Yes, it is possible to write incredibly complex algorithms which produce a fake semblance of "learning," but as far as I can tell, all of these algorithms are completely closed-loop, meaning they are oblivious to any factors their programmers did not consider salient. They are also incapable of discerning the environment with any high degree of fidelity; every single aspect of their discriminatory ability must be programmed IN explicitly, or they are oblivious.

The goldfish, by contrast, is responsive to a broad range of categories: things that go fast, things that make a broad range of sounds, things inside or outside a certain range of colors/sizes/shapes. And it relies on (a) its drive to survive (and the myriad sub-routines that comprises) and (b) its general set of response patterns to gain more knowledge of things it has no familiarity with.

The main problem is that "AI" experts do not know a fucking thing about what actual intelligence is, and seem oblivious to the fact that the software they create cannot even compete with a simple vertebrate.

Show me a "Goldfish" robot that can do even 50% of what a goldfish can do!

There is no known system that integrates all the features required for an autonomous combat unit to function. Remote-controlled units are simply human-controlled ones. Then there are mixed systems, where varying degrees of human orders are taken as parameters by the robot, with human control scaling from broad orders like "establish perimeter / patrol area" all the way down to "fire at third window, air burst".

What I am seeing is:

- We have a neural network demonstrated to be able to learn a game with an immense number of possible moves. The system doesn't calculate all the moves; it actually learns how to operate in an incredibly limited environment, coming up with solutions that approach optimal and are by far superior to anything the best human in the world can achieve over decades.

- We have demonstrated that a simple system can manage aerial combat tasks and is far superior to a human pilot.

- We have demonstrated multiple systems able to operate legs and tracks and to navigate easy terrain.

It isn't far-fetched to simply put the package together over the next 100 years while seeing gradual improvements to each of the subsystems.

Sentient it won't be. But can it fire an integrated antipersonnel or antitank weapon? Sure.

Goldfish cannot defeat AlphaGo.

It went up against the best human player in the world and crushed him. Goldfish are inferior to humans in intelligence, but I am 100% sure that with that headline you could manage to clickbait a million views, easy.

Autonomous battlefield robots? It sounds like we agree that 100 to 150 years is a stretch; maybe even 1,000 years is a stretch.

Agree?

So, if we try to keep our estimates conservative, à la Children of a Dead Earth, and we are talking about . . . let's say the year 2121 (has a ring to it) . . . here is where your expertise suggests and my own gut intuition leads me:

1. Aircraft will still be operated by humans (whether onboard or by remote control), although onboard computers will play an increasing role in operations, and in combat in particular. In some instances, the onboard machine(s) may have some capacity to operate the aircraft if the human operator is compromised: pilot unconscious, RTB, that sort of thing. In fact, if I recall, there are already airliner-sized aircraft that can taxi, take off, circle, approach, land and taxi home all by themselves without human input, so that would seem to be inevitable.

2. Pretty much the same with any other vehicle: an increasing "partnership" between human operators and their binary-based lackeys.

3. Continued use of remote-control drones, both aerial and terrestrial, and, as in (1) above, increasing capacity for emergency autonomous operation, if not virtually complete autonomy within narrow frames of reference (though probably with a human operator sitting and watching a monitor the whole time).

4. For land warfare, it seems quite likely that a range of drone/semi-autonomous machines purpose-built for specific functions will proliferate . . . "Sarge, looks like we've got a sniper kill zone up ahead . . . Roger that, bring up the snooper bot, Edwards, and send him out there to map all the angles of fire for heat signatures . . ." Or a scenario like this: terrorists have hostages on the top floor of a high-rise. Pigeon robots take up positions roosting on nearby buildings to provide 360-degree camera footage of the interior through the windows . . . or a VanguardBot, a simple armored tracked device with cameras and perhaps a gun, that can go around corners and either draw fire to flush out bad guys or shoot them . . . etc.

5. Grunts will still be essential, although many (or perhaps most?) of them will increasingly be responsible for operating drones/robots/computers rather than for lugging gear, shooting, operating small arms, and neutralizing threats. Those latter roles will still matter, because there is not likely to be a machine that can compete with human cunning in a real combat situation, at least not without a human handler/operator close behind to keep it from falling prey to its own stupidity or deficiencies.

I'd assume that if we extend our prophecy another 100 or 150 years, we might have to agree that "all bets are off," in that it is possible over such a long time frame (maybe even next year, for that matter!) that some major milestone in computer science occurs which rivals the creation of binary or of semiconductors and microprocessors, and which makes the cutting-edge stuff of today look like abacuses. In my opinion, the most promising watershed would be in wetware: computers based on biological neural systems instead of crystallography. A single amoeba probably has far more "computing power," at least along some dimensions, than today's best hardware, though the obvious problem is: how does one decipher the mechanisms involved and "harness" them in mechanical devices? If "computers" that rivaled the largest high-power machines of today could be "grown" on a microscopic chip and implanted subcutaneously into an organism, then we might all "become" robots . . .

So with all that said, what are the rifles and other small arms that the soldiers in this imaginary future we are concocting going to be shooting!?

Sheeze, you start a thread called "Future Small Arms," make an off-hand comment on "spacecraft," and 12 pages later you finally get all the "context" stuff laid out so you can actually talk about "small arms"!

. . . but another side bar . . . I think that "cryo" technology is also going to have a huge impact!
