
Whoa, it appears this vulnerability has the major players scrambling to repair the bug.

How to protect your PC against the major ‘Meltdown’ CPU security flaw
Only Intel machines are affected by Meltdown
By Tom Warren

Details have emerged on two major processor security flaws this week, and the industry is scrambling to issue fixes and secure machines for customers. Dubbed “Meltdown” and “Spectre,” the flaws affect nearly every device made in the past 20 years. The Meltdown flaw only affects Intel processors, and researchers have already released proof of concept code that could lead to attacks using Meltdown.

The vulnerabilities allow an attacker to compromise a processor's privileged memory by exploiting the way instructions execute speculatively and in parallel. They also allow an attacker to use JavaScript code running in a browser to read memory belonging to the browser's own process. That memory could contain keystrokes, passwords, and other valuable information. Researchers have already shown how easily this attack works on Linux machines, but Microsoft says it has “not received any information to indicate that these vulnerabilities have been used to attack customers at this time.”

Protecting a Windows PC is complicated right now, and there are still a lot of unknowns. Microsoft, Google, and Mozilla are all issuing patches for their browsers as a first line of defense. Firefox 57 (the latest) includes a fix, as do the latest versions of Internet Explorer and Edge for Windows 10. Google says it will roll out a fix with Chrome 64, which is due to be released on January 23rd. Apple has not commented on how it plans to fix its Safari browser, or even macOS. Chrome, Edge, and Firefox users on Windows won't really need to do much apart from accepting the automatic updates to ensure they're protected at the basic browser level.

For Windows itself, this is where things get messy. Microsoft has issued an emergency security patch through Windows Update, but if you're running third-party anti-virus software, it's possible you won't see that patch yet. Security researchers are attempting to compile a list of anti-virus software that's supported, but it's a bit of a mess, to say the least.

Firmware updates from Intel are also required for additional hardware protection, and those will be distributed separately by OEMs. It's up to each OEM to release the relevant Intel firmware updates, and support information for them can be found on each OEM's support website. If you built your own PC, you'll need to check with your OEM part suppliers for potential fixes. Story Continues

Update to the latest version of Chrome (on January 23rd) or Firefox 57 if you use either browser
Check Windows Update and ensure KB4056892 is installed for Windows 10*
Check your PC OEM website for support information and firmware updates and apply any immediately

*The update was NOT installed on my system; I had to search Microsoft's site, then download and install it manually. Win10 only, and there are a few different flavors, so be sure you get the right one.

The aim of an argument or discussion should not be victory, but
progress. -- Joseph Joubert

Intel has been hit with at least three class-action lawsuits over the major processor vulnerabilities revealed this week.

The flaws, called Meltdown and Spectre, exist within virtually all modern processors and could allow hackers to steal sensitive data although no data breaches have been reported yet. While Spectre affects processors made by a variety of firms, Meltdown appears to primarily affect Intel processors made since 1995.

Three separate class-action lawsuits have been filed by plaintiffs in California, Oregon and Indiana seeking compensation, with more expected. All three cite the security vulnerability and Intel’s delay in public disclosure from when it was first notified by researchers of the flaws in June. Intel said in a statement it “can confirm it is aware of the class actions but as these proceedings are ongoing, it would be inappropriate to comment”.

The plaintiffs also cite the alleged computer slowdown that will be caused by the fixes needed to address the security concerns, which Intel disputes is a major factor. “Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time,” Intel said in an earlier statement. Story Continues


While I really do appreciate an expert like Linus holding Intel's feet to the fire, I kind of hope he is wrong. I've been installing multiple FIRMWARE (Management Engine) patches on clients' HP workstations non-stop for many days now. I'd hate to think it was busy work.

Linus Torvalds is not happy about the patches that Intel has developed to protect the Linux kernel from the Spectre and Meltdown flaws.

In a posting on the Linux kernel mailing list, the Linux creator criticised differences in the way that Intel approached patches for the Meltdown and Spectre flaws. He said of the patches: "They do literally insane things. They do things that do not make sense."

Torvalds added: "And I really don't want to see these garbage patches just mindlessly sent around."

Spectre and Meltdown are design flaws in modern CPUs that could allow hackers to get around system protections on a wide range of PCs, servers, and smartphones, letting attackers access data, including passwords, from memory. Since the flaws were discovered, the tech industry has been scrambling to fix them before they can be exploited.

However, others on the mailing list took a different view: "Certainly it's a nasty hack, but hey -- the world was on fire and in the end we didn't have to just turn the datacentres off and go back to goat farming, so it's not all bad," said one.

It's not the first time the Linux chief has criticised Intel's approach to the Spectre and Meltdown flaws. Earlier this month, he said: "I think somebody inside of Intel needs to really take a long hard look at their CPU's, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed." Story Continues


As if politicians don't say bad enough things already, now we have to wonder whether they ever said that stuff at all.

How The Wall Street Journal is preparing its journalists to detect deepfakes

“We have seen this rapid rise in deep learning technology and the question is: Is that going to keep going, or is it plateauing? What’s going to happen next?”

By Francesco Marconi and Till Daldrup

Artificial intelligence is fueling the next phase of misinformation. The new type of synthetic media known as deepfakes poses major challenges for newsrooms when it comes to verification. This content is indeed difficult to track: Can you tell which of the images below is a fake?

We at The Wall Street Journal are taking this threat seriously and have launched an internal deepfakes task force led by the Ethics & Standards and the Research & Development teams. This group, the WSJ Media Forensics Committee, is composed of video, photo, visuals, research, platform, and news editors who have been trained in deepfake detection. Beyond this core effort, we’re hosting training seminars with reporters, developing newsroom guides, and collaborating with academic institutions such as Cornell Tech to identify ways technology can be used to combat this problem.

“Raising awareness in the newsroom about the latest technology is critical,” said Christine Glancey, a deputy editor on the Ethics & Standards team who spearheaded the forensics committee. “We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.”

Here’s an overview for journalists of the insights we’ve gained and the practices we’re using around deepfakes.
How are most deepfakes created?

The production of most deepfakes is based on a machine learning technique called “generative adversarial networks,” or GANs. This approach can be used by forgers to swap the faces of two people — for example, those of a politician and an actor. The algorithm looks for instances where both individuals showcase similar expressions and facial positioning. In the background, artificial intelligence algorithms are looking for the best match to juxtapose both faces.
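The adversarial idea behind GANs can be sketched in miniature: one model proposes fakes, another scores how “real” they look, and the proposer improves by fooling the scorer. The toy below is hypothetical and vastly simplified (a real GAN trains two neural networks against each other by gradient descent); it uses a fixed scoring function and simple hill-climbing, with all numbers invented for illustration:

```python
import random

def discriminator(x, data_mean=10.0):
    """Score in (0, 1]: the closer x is to the real data mean, the higher."""
    return 1.0 / (1.0 + (x - data_mean) ** 2)

def train_generator(steps=2000, seed=0):
    """Hill-climb the generator's single output value until it fools
    the (fixed) discriminator, i.e. looks like real data."""
    rng = random.Random(seed)
    g = 0.0
    for _ in range(steps):
        candidate = g + rng.uniform(-0.5, 0.5)   # propose a small tweak
        if discriminator(candidate) > discriminator(g):  # more "real"?
            g = candidate
    return g

print(round(train_generator(), 1))  # lands near the data mean of 10.0
```

In a real GAN the discriminator is also learning at the same time, which is what makes training notoriously unstable and what gradually pushes the fakes toward photorealism.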

Because research about GANs and other approaches to machine learning is publicly available, the ability to generate deepfakes is spreading. Open source software already enables anyone with some technical knowledge and a powerful-enough graphics card to create a deepfake.

Some academic institutions such as New York University are taking unique approaches to media literacy. One class at the Interactive Telecommunications Program (ITP) at NYU Tisch — “Faking the News” — exposes students to the dangers of deepfakes by teaching them how to forge content using AI techniques. “Studying this technology helps us not only understand the potential implications but also the limitations,” said Chloe Marten, a product manager at Dow Jones and master’s candidate who enrolled in the NYU class.
Techniques used to create deepfakes

Deepfake creators can use a variety of techniques. Here are a few:

Faceswap: An algorithm can seamlessly insert the face of a person into a target video. This technique could be used to place a person’s face on an actor’s body and put them in situations that they were never really in.

Lip sync: Forgers can graft a lip-syncing mouth onto someone else’s face. Combining the footage with new audio could make it look like they are saying things they are not.

Facial reenactment: Forgers can transfer facial expressions from one person into another video. With this technique, researchers can toy with a person’s appearance and make them seem disgusted, angry, or surprised.

Motion transfer: Researchers have also discovered how to transfer the body movements of a person in a source video to a person in a target video. For instance, they can capture the motions of a dancer and make target actors move in the same way. In collaboration with researchers at the University of California, Berkeley, Journal correspondent Jason Bellini tried this technique out for himself and ended up dancing like Bruno Mars.

Journalists have an important role in informing the public about the dangers and challenges of artificial intelligence technology. Reporting on these issues is a way to raise awareness and inform the public.

From “Deepfake Videos Are Getting Real and That’s a Problem,” The Wall Street Journal, October 15, 2018.
How can you detect deepfakes?

We’re working on solutions and testing new tools that can help detect or prevent forged media. Across the industry, news organizations can consider multiple approaches to help authenticate media if they suspect alterations.

“There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” said Natalia V. Osipova, a senior video journalist at the Journal. But the best option is often traditional reporting: “Reach out to the source and the subject directly, and use your editorial judgment.”
Examining the source

If someone has sent in suspicious footage, a good first step is to try to contact the source. How did that person obtain it? Where and when was it filmed? Getting as much information as possible, asking for further proof of the claims, and then verifying is key.

If the video is online and the uploader is unknown, other questions are worth exploring: Who allegedly filmed the footage? Who published and shared it, and with whom? Checking the metadata of the video or image with tools like InVID or other metadata viewers can provide answers.

In addition to going through this process internally, we collaborate with content verification organizations such as Storyful and the Associated Press. This is a fast-moving landscape with emerging solutions appearing regularly in the market. For example, new tools including TruePic and Serelay use blockchain to authenticate photos. Regardless of the technology used, the humans in the newsroom are at the center of the process.
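The hash-chain idea behind such authentication tools can be sketched as follows. This is a hypothetical toy, not how TruePic or Serelay actually work (their designs are proprietary): each registered photo's SHA-256 digest is chained to the hash of the previous ledger entry, so altering either a photo or an earlier entry breaks verification.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class PhotoLedger:
    """Append-only ledger: each entry commits to a photo's digest and
    to the hash of the previous entry, so tampering is detectable."""

    def __init__(self):
        self.chain = []

    def register(self, photo_bytes: bytes) -> str:
        prev = self.chain[-1]["entry_hash"] if self.chain else "0" * 64
        body = {"photo_hash": sha256(photo_bytes), "prev": prev}
        entry_hash = sha256(json.dumps(body, sort_keys=True).encode())
        self.chain.append({**body, "entry_hash": entry_hash})
        return entry_hash

    def verify(self, photo_bytes: bytes) -> bool:
        """True only if these exact bytes were registered AND the
        whole chain is still intact."""
        target, prev, found = sha256(photo_bytes), "0" * 64, False
        for e in self.chain:
            body = {"photo_hash": e["photo_hash"], "prev": e["prev"]}
            if (e["prev"] != prev or
                    sha256(json.dumps(body, sort_keys=True).encode()) != e["entry_hash"]):
                return False  # chain was tampered with
            found = found or e["photo_hash"] == target
            prev = e["entry_hash"]
        return found

ledger = PhotoLedger()
ledger.register(b"raw bytes of photo A")
ledger.register(b"raw bytes of photo B")
print(ledger.verify(b"raw bytes of photo A"))  # True
print(ledger.verify(b"doctored photo bytes"))  # False
```

Note the limitation this makes visible: a ledger can prove a photo existed unchanged since registration, but says nothing about whether it was staged or misleading in the first place, which is why the humans in the newsroom stay central.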

“Technology alone will not solve the problem,” said Rajiv Pant, chief technology officer at the Journal. “The way to combat deepfakes is to augment humans with artificial intelligence tools.”
Finding older versions of the footage

Deepfakes are often based on footage that is already available online. Reverse image search engines like Tineye or Google Image Search are useful to find possible older versions of the video to suss out whether an aspect of it was manipulated.
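Under the hood, reverse image matching often relies on perceptual hashing: reduce the image to a tiny grayscale grid, record which pixels are brighter than the average, and near-duplicates then produce near-identical bit strings. A minimal sketch, with invented 8x8 "images" as lists of brightness values (real engines such as TinEye use more sophisticated signatures):

```python
def average_hash(pixels):
    """Perceptual "average hash": one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# 8x8 toy "images": a vertical gradient, a slightly brightened
# copy of it, and its photographic negative
grad = [[r * 30] * 8 for r in range(8)]
brighter = [[p + 4 for p in row] for row in grad]
negative = [[255 - p for p in row] for row in grad]

print(hamming(average_hash(grad), average_hash(brighter)))  # 0
print(hamming(average_hash(grad), average_hash(negative)))  # 64
```

Because the brightened copy keeps the same bright/dark pattern relative to its own mean, its hash is unchanged, which is exactly why such hashes survive recompression and resizing and can surface older versions of a clip.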
Examining the footage

Editing programs like Final Cut enable journalists to slow footage down, zoom the image, and look at it frame by frame or pause multiple times. This helps reveal obvious glitches: glimmering and fuzziness around the mouth or face, unnatural lighting or movements, and differences between skin tones are telltale signs of a deepfake.

As an experiment, here are some glitches the Journal’s forensics team found during a training session using footage of Barack Obama created by video producers at BuzzFeed.

The box-like shapes around the teeth reveal that this is a picture stitched onto the original footage.

Unnatural movements like a shifting chin and growing neck show that the footage is faked.

In addition to these facial details, there might also be small edits in the foreground or background of the footage. Does it seem like an object was inserted into or deleted from a scene in a way that might change the context of the video (e.g. a weapon, a symbol, a person, etc.)? Again, glimmering, fuzziness, and unnatural light can be indicators of faked footage.

In the case of audio, watch out for unnatural intonation, irregular breathing, metallic sounding voices, and obvious edits. These are all hints that the audio may have been generated by artificial intelligence. However, it’s important to note that image artifacts, glitches, and imperfections can also be introduced by video compression. That’s why it is sometimes hard to conclusively determine whether a video has been forged or not.
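One crude, automatable version of this frame-by-frame check is to flag frames whose pixel content jumps sharply from the previous frame, since splices and stitched-in regions often show up as such spikes. A hypothetical sketch, with frames as plain lists of grayscale values (real footage would first need decoding, and compression noise would demand a more careful threshold):

```python
def frame_diff(a, b):
    """Mean absolute per-pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def suspicious_frames(frames, threshold=30.0):
    """Indices of frames that differ sharply from their predecessor."""
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]

# frames of a slow fade, then an abrupt splice at frame 3
frames = [[10] * 16, [12] * 16, [14] * 16, [200] * 16, [202] * 16]
print(suspicious_frames(frames))  # [3]
```

Flagged frames are candidates for the manual inspection described above, not verdicts on their own.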
The democratization of deepfake creation adds to the challenge

A number of companies are creating technologies — often for innocuous reasons — that nonetheless could eventually end up being used to create deepfakes. Some examples:
Object extraction

Adobe is working on Project Cloak, an experimental tool for object removal in video, which makes it easy for users to take people or other details out of the footage. The product could be helpful in motion picture editing. But some experts think that micro-edits like these — the removal of small details in a video — might be even more dangerous than blatant fakes since they are harder to spot.

Weather alteration

There are algorithms for image translation that enable users to alter the weather or time of day in a video, like this example developed by chip manufacturer Nvidia by using generative adversarial networks. These algorithms could be used for post-production of movie scenes shot during days with different weather. But this could be problematic for newsrooms and others, because in order to verify footage and narrow down when videos were filmed, it is common to examine the time of day, weather, position of the sun, and other indicators for clues to inconsistencies.

About time some defensive tools were released to the public. Helps if "we" can protect ourselves.
The NSA Makes Ghidra, a Powerful Cybersecurity Tool, Open Source
by: Lily Hay Newman

The National Security Agency develops advanced hacking tools in-house for both offense and defense—which you could probably guess even if some notable examples hadn't leaked in recent years. But on Tuesday at the RSA security conference in San Francisco, the agency demonstrated Ghidra, a refined internal tool that it has chosen to open source. And while NSA cybersecurity adviser Rob Joyce called the tool a "contribution to the nation’s cybersecurity community" in announcing it at RSA, it will no doubt be used far beyond the United States.

You can't use Ghidra to hack devices; it's instead a reverse-engineering platform used to take "compiled," deployed software and "decompile" it. In other words, it transforms the ones and zeros that computers understand back into a human-readable structure, logic, and set of commands that reveal what the software you churn through it does. Reverse engineering is a crucial process for malware analysts and threat intelligence researchers, because it allows them to work backward from software they discover in the wild—like malware being used to carry out attacks—to understand how it works, what its capabilities are, and who wrote it or where it came from. Reverse engineering is also an important way for defenders to check their own code for weaknesses and confirm that it works as intended. Story Continues
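For a feel of what decompilation gives you, Python's standard library offers a toy analogue: the `dis` module turns compiled Python bytecode back into readable instructions, much as Ghidra lifts native machine code into assembly and C-like pseudocode (Ghidra itself targets binaries, not Python bytecode):

```python
import dis
import io

# A function whose source we pretend not to have: only its compiled
# bytecode is available, and we want to recover what it does.
def secret(x):
    return x * 2 + 1

buf = io.StringIO()
dis.dis(secret, file=buf)  # disassemble the compiled function
listing = buf.getvalue()
print(listing)  # shows LOAD_FAST, arithmetic ops, a RETURN...
```

Reading the instruction listing, an analyst can reconstruct "load the argument, double it, add one, return it" without ever seeing the source, which is reverse engineering in a nutshell.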
Note: I went to the NSA Ghidra website, and they referred me to this Ghidra v9 download site. It works.



Yep, I had downloaded it yesterday already.

An opinion should be the result of thought, not a substitute for it.
- Jef Mallett

Popular browser. Sounds like a CRITICAL flaw. If you use Chrome, you might want this patch ASAP.

Stop What You're Doing and Update Google Chrome
Google Chrome Security and Desktop Engineering Lead Justin Schuh says users should install the latest version of the browser – 72.0.3626.121 – right away.

By Angela Moscaritolo

Google is urging Chrome users to update the web browser right away to patch a zero-day vulnerability that is being actively exploited.

In a Tuesday tweet, Google Chrome Security and Desktop Engineering Lead Justin Schuh said users should install the latest version of the browser—72.0.3626.121—right away.

"Seriously, update your Chrome installs... like right this minute," he wrote.

Google started rolling out the patch for Chrome on Windows, Mac, and Linux on Friday. This week, Google revealed that the update corrects a "high" severity flaw—CVE-2019-5786—that has been under attack by cybercriminals.
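Chrome shows its installed build under chrome://version. One subtlety when checking it against the fixed build: dotted version strings must be compared numerically field by field, not as plain text (as strings, "9.0" would sort after "72.0"). A small helper sketch:

```python
def version_at_least(current: str, required: str) -> bool:
    """True when dotted version `current` >= `required`, comparing
    each numeric field; shorter versions are padded with zeros."""
    cur = [int(p) for p in current.split('.')]
    req = [int(p) for p in required.split('.')]
    n = max(len(cur), len(req))
    cur += [0] * (n - len(cur))
    req += [0] * (n - len(req))
    return cur >= req  # Python compares lists field by field

print(version_at_least("72.0.3626.121", "72.0.3626.121"))  # True
print(version_at_least("72.0.3626.119", "72.0.3626.121"))  # False
```

If the helper reports your build as older than 72.0.3626.121, relaunching Chrome after the update finishes downloading applies the patch.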

"Google is aware of reports that an exploit for CVE-2019-5786 exists in the wild," the web giant said.

A member of Google's Threat Analysis Group first reported the bug on Feb. 27. At this point, details of the vulnerability are scant, as Google said it's restricting access to bug details until a majority of users have installed the update. Story Continues
