PMO advises HRD Ministry to include elements of Sainik schools in regular schools

The inclusion of such elements is meant to promote the “holistic development” of students

Kendriya Vidyalayas and Jawahar Navodaya Vidyalayas will have Sainik school-like features

July 21, 2017: The Prime Minister’s Office (PMO) has advised the HRD Ministry to include elements of military schools in regular schools as well, with the aim of instilling discipline, physical fitness, and patriotism in non-military schools. The PMO has suggested introducing such elements in all schools for the “holistic development” of students. According to a report, a meeting was called by the PMO on Tuesday, July 18, to discuss the proposal with senior HRD officials.

The idea of introducing military elements in schools was first raised under the NDA government at a meeting of the Central Advisory Board of Education (the government’s highest advisory body on education) held in October last year. At the meeting, Mahendra Nath Pandey, Minister of State for HRD, stressed the importance of military education for students as a way to promote patriotism and nationalism, according to an Indian Express report.

He further added that if 2,000 of the 10,000 students at Nalanda University had been trained in military education, they would have foiled “Bakhtiyar Khilji’s plan” to plunder and raze the institute.

Sainik Schools were established in 1961 by the then Defence Minister V K Krishna Menon with the purpose of preparing youngsters for the defence services.


The HRD Ministry is exploring ways to introduce Sainik school-like features in Kendriya Vidyalayas (KVs) and Jawahar Navodaya Vidyalayas (JNVs), which are also residential schools. The PMO’s suggestion was also discussed with the Central Board of Secondary Education (CBSE).

The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities, along with privacy and military experts, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers.

AI can pose a risk of misuse by hackers. Pixabay

The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.

“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute. “There was a gap in the literature around the issue of malicious use.”

Artificial intelligence, or AI, involves using computers to perform tasks normally requiring human intelligence, such as making decisions or recognizing text, speech or visual images.

It is considered a powerful force for unlocking all manner of technical possibilities but has become a focus of strident debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.


The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labour and expertise. New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves.

It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.

The researchers detail the power of AI to generate synthetic images, text and audio that impersonate others online in order to sway public opinion, noting the threat that authoritarian regimes could deploy such technology.

The report makes a series of recommendations including regulating AI as a dual-use military/commercial technology.

Artificial Intelligence is used to read text and images. Wikimedia Commons

It also raises the question of whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and react to the potential dangers such developments might pose.

“We ultimately ended up with a lot more questions than answers,” Brundage said. VOA