Cyber Caliphate: What Apps Are the Islamic State Using?

As the argument goes, law enforcement agencies must protect the safety of citizens, and to do so they must be in contact with representatives of the IT sector. This in turn compels mail services, messaging apps, and smartphone manufacturers to contact the authorities and disclose user information. However, excesses do occur: Pavel Durov, founder of the Telegram messaging app, refused to provide the FSB with its encryption keys. Telegram was repeatedly accused of being the messaging application of terrorists, and amid attempts to block the service, the debate over citizens' right to private correspondence grew more heated. The example of the Islamic State, however, only goes to show that militants do not live by Telegram alone: they act far more competently and work to keep a step ahead of law enforcement agencies. What tools do terrorists actually use, and how should we fight the digital technologies of militants?

Different Goals, Different Weapons

The success of Islamic State militants can largely be attributed to brilliant propaganda work. Depending on their goals, militants have been able to resort to various tools for propaganda, recruitment, and communication between group members. Propaganda includes all the usual tools: videos, online magazines, radio stations, brochures, and posters designed for both Arabic and Western audiences.

Western services have played a cruel joke on Western society, facilitating the distribution of propaganda videos, such as one of the most popular clips, “Salil as-savarim” (The Sound of Swords), on YouTube and Twitter as well as through file-sharing services such as archive.org and justpaste.it. YouTube administrators repeatedly deleted the videos, but they were simply uploaded again from new accounts, with view counts driven up by reposts on Twitter. The use of Twitter for these purposes is discussed in detail in the article “Twitter and Jihad: the Communication Strategy of ISIS”, published in 2015. According to the former national security adviser of Iraq, Mowaffak al-Rubaie, it was in large part thanks to Twitter and Facebook that 30,000 Iraqi soldiers laid down their weapons, removed their uniforms, and abandoned Mosul to jihadis without a fight in 2014 [1].

ISIS has taken into account the mistakes of its jihadi predecessors and has skilfully set its own propaganda up against attempts by the foreign press to portray it in a negative light. However, on a deeper, internal level, militants employ other communication tools more reliable than social networks.

Anonymous Networks

In September 2017, Colin P. Clarke, a political scientist with the non-profit RAND Corporation and the International Centre for Counter-Terrorism in The Hague, suggested that ISIS would most likely continue to use encrypted messaging to organize direct terrorist attacks abroad even if the caliphate were to become a “less centralized entity”.

Terrorists, however, have been using such tools for some time. In early 2015, it emerged that ISIS had developed a 34-page manual on securing communications. The document, based on a Kuwaiti firm's cybersecurity manual, surfaced on jihadi forums. It listed the applications considered most suitable for use, such as Mappr, a tool for changing the apparent location of a person in photographs. The Avast SecureLine application serves a similar purpose, masking the user's real IP address by presenting, for example, an access point in South Africa or Argentina in place of one in Syria.

Jihadis have advised using non-American companies such as Hushmail and ProtonMail for email correspondence. Hushmail CEO Ben Cutler acknowledged in comments to Tech Insider that the company had been featured in the manual, but added that “It is widely known that we cooperate fully and expeditiously with authorities pursuing evidence via valid legal channels”. In turn, CEO of Proton Technologies AG Andy Yen mentioned that besides ProtonMail, terrorists likewise made use of Twitter, mobile phones, and rental cars. “We couldn’t possibly ban everything that ISIS uses without disrupting democracy and our way of life,” he emphasized.

For telephone calls, the manual recommended the use of such services as the German CryptoPhone and BlackPhone, which guarantee secure message and voice communications. FireChat, Tin Can and The Serval Project provide communication even without access to the Internet, for example, by using Bluetooth. The programs recommended by terrorists for encrypting files are VeraCrypt and TrueCrypt. The CEO of Idrix (the maker of VeraCrypt) Mounir Idrassi admitted that “Unfortunately, encryption software like VeraCrypt has been and will always be used by bad guys to hide their data”. Finally, the document makes mention of Pavel Durov’s messaging system, Telegram.

It was a massive information campaign that saw Telegram branded with the unofficial stamp of the terrorists' messaging app. Foreign politicians played their part: three days before the attack on the Berlin Christmas market in December 2016, senior members of the US House Foreign Affairs Committee urged Durov to take immediate steps to block ISIS content, warning him that terrorists were using the platform not only for propaganda but also to coordinate attacks. Moreover, Michael Smith, an advisor to the US Congress and co-founder of Kronos Advisory, claims that Al-Qaeda also used Telegram to communicate with journalists and spread news to its followers. Against this backdrop, Telegram reported blocking 78 channels used by terrorists. It was this interest and pressure from the authorities that ultimately drove the militants to seek a replacement for the messaging service.

Monitoring Safety

Telegram representatives have repeatedly claimed that their messaging service is the safest in the world thanks to its use of end-to-end encryption. This is, at the very least, doublespeak: end-to-end encryption is used only in secret chats, and even then it has obvious shortcomings, as pointed out by Sergey Zapechnikov and Polina Kozhukhova in their article On the Cryptographic Resistance of End-to-End Secure Connections in the WhatsApp and Telegram Messaging Applications [2]. First, vulnerabilities in the SS7 network, exploited through SMS-based authorization, make it possible to access ordinary chats; secret chats cannot be hacked this way, but an attacker can initiate any chat on behalf of the victim. Second, the developers violated one of the main principles of cryptography: do not invent new protocols when protocols with proven security that solve the same tasks already exist. Third, Telegram uses the classic finite-field Diffie–Hellman protocol and does not protect metadata, so one can track message transfers on the server, add any number from the messaging service's client to an address book, and find out when a person was last online.
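The finite-field Diffie–Hellman exchange referred to here can be sketched in a few lines of Python. This is a toy illustration with deliberately small parameters, not any messenger's actual implementation; real systems use 2048-bit-plus groups or elliptic curves:

```python
import secrets

# Toy finite-field Diffie-Hellman exchange (illustrative only).
P = 0xFFFFFFFB  # a small public prime; far too small for real security
G = 5           # public generator

# Each party picks a private exponent and publishes G^x mod P.
a = secrets.randbelow(P - 2) + 1   # Alice's private key
b = secrets.randbelow(P - 2) + 1   # Bob's private key
A = pow(G, a, P)                   # Alice's public value
B = pow(G, b, P)                   # Bob's public value

# Both sides derive the same shared secret without ever transmitting it.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
assert shared_alice == shared_bob
```

The private exponents never leave each device; only the public values cross the network, which is what makes the scheme usable over a hostile channel.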

In this context, WhatsApp seems more reliable, since it uses end-to-end encryption for all chats and generates a shared secret key using the Diffie–Hellman protocol on elliptic curves. Many terrorists have turned to this messenger. In May 2015, in “The Life of Muhajirun”, the blog of a woman writing about her and her husband's journey to Germany, the author described how her husband contacted smugglers over WhatsApp while in Turkey.

In Hacking ISIS: How to Destroy the Cyber Jihad, by Malcolm W. Nance, Chris Sampson, and Ali H. Soufan, the authors recount the story of Abderrahim Moutaharrik, who planned an attack on a Milan synagogue with the intent of fleeing afterwards to Syria. He used WhatsApp to coordinate the attack, and Italian police were able to identify him after an audio message was sent.

However, jihadis are skeptical about WhatsApp, and not only for reasons of security. In January 2016, a supporter of jihad, the security expert Al-Habir al-Takni, published a survey of 33 smartphone applications, sorting them into “safe”, “moderately safe”, and “unreliable”. WhatsApp ended up at the bottom of the rating. In defence of his opinion, the expert noted that the messaging service had been bought by Facebook, which he described as an Israeli company (WhatsApp was purchased by Mark Zuckerberg in 2014 for $19 billion; the messenger has over a billion users worldwide).

In light of complaints about Telegram and WhatsApp, and as laws tighten, terrorists have become preoccupied with creating their own application. In January 2016, the Ghost Security Group, which specializes in the fight against terrorism, uncovered online an instant messaging service created by militants, Alrawi. This Android application cannot be downloaded from Google Play; it is available only on the Dark Web. Alrawi has come to take the place of Amaq, a messaging service providing access to news and propaganda videos, including videos of executions and footage from the battlefield. Unlike Amaq, Alrawi offers full encryption. The Ghost report noted that after American drone strikes killed the prominent ISIS hacker Junaid Hussain in the summer of 2015, the cyber caliphate's effectiveness declined dramatically. “They currently pose little threat to Western society in terms of data breaches, however that is subject to change at any time,” a spokesperson for the hacker group told Newsweek.

The Game to Get Ahead

Jihadis, like hackers, are often a step ahead of the authorities and in tune with the latest technological innovations. Gabriel Weimann, a professor at the University of Haifa in Israel and the world's foremost researcher of Internet extremism, noted that terrorist groups tend to be the first users of new online platforms and services. As social media companies lag behind in the fight against extremism on their platforms, terrorist groups grow more experienced in modifying their own communication strategies. “The learning curve is now very fast, once it took them years to adapt to a new platform or a new media. Now they do it within months,” said Weimann.

These words are borne out in practice: every popular service, like WhatsApp or Telegram, has alternatives that jihadis are more than willing to use. In the above-mentioned Hacking ISIS: How to Destroy the Cyber Jihad, the authors list dozens of other services jihadis utilize. For example, Edward Snowden's favourite application, Signal, has open source code, reliably encrypts information, and allows users to exchange messages and calls with contacts from their phone book; it is community sponsored through grants. According to Indian authorities, ISIS member Abu Anas used Signal as a secure alternative to WhatsApp. Another solution, released in 2014, is the messaging service Wickr, created by a group of cybersecurity and privacy specialists. It was the first application to let users assign a “life” to a message, ranging from a few minutes to several days. Wickr destroys messages not only on smartphones and computers, but also on the servers through which correspondence passes. The program can erase an entire history, after which messages cannot be restored by any means. The Australian Jake Bilardi came across an ISIS recruitment message on Telegram and was to meet a recruiter through Wickr, though he was detained in time.
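The message “life” described above is essentially a per-message time-to-live. A minimal, purely illustrative Python sketch of the idea (class and method names are hypothetical, and a real service must also wipe copies on its servers and storage media) might look like this:

```python
import time

class EphemeralStore:
    """Toy sketch of per-message lifetimes: each message carries a
    time-to-live and is purged once it expires. Illustrative only."""

    def __init__(self):
        self._messages = {}  # msg_id -> (text, expiry timestamp)

    def put(self, msg_id, text, ttl_seconds):
        # Record the message together with its expiry deadline.
        self._messages[msg_id] = (text, time.monotonic() + ttl_seconds)

    def get(self, msg_id):
        entry = self._messages.get(msg_id)
        if entry is None:
            return None
        text, expiry = entry
        if time.monotonic() >= expiry:
            del self._messages[msg_id]   # destroy the expired message
            return None
        return text

store = EphemeralStore()
store.put(1, "meet at noon", ttl_seconds=0.05)
assert store.get(1) == "meet at noon"   # still within its lifetime
time.sleep(0.1)
assert store.get(1) is None             # destroyed after the "life" expires
```

The sketch uses `time.monotonic()` rather than wall-clock time so that system clock changes cannot resurrect an expired message.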

Surespot, Viber, Skype, and the Swiss messaging system Threema are also mentioned. The last deserves a note of its own: Threema received 6 out of a possible 7 points for security from the Electronic Frontier Foundation (a non-profit organization founded in the U.S. to protect, in the era of technology, the rights established in the Constitution and the Declaration of Independence). Jihadis have also named Silent Circle a preferred app. After learning of this, the developers tightened security requirements; one of the creators, Mike Janke, is a former naval officer, and Silent Circle now cooperates with governments and intelligence agencies. The list doesn't end there: Junaid Hussain likewise made use of Surespot and Kik.


But if applications are primarily used on smartphones, other programs exist for laptops and PCs, readily used by both information security specialists and jihadis: for example, the Tor browser, or T.A.I.L.S. (The Amnesic Incognito Live System), a Debian-based Linux distribution created to provide privacy and anonymity. All outgoing T.A.I.L.S. connections are routed through the Tor network, and all non-anonymous ones are blocked. The system leaves no trace on the device on which it was used. T.A.I.L.S. was used by Edward Snowden to expose PRISM, the US surveillance program whose purpose was the mass collection of information sent over telecommunication networks.

It can be concluded that militants have a great number of communication tools at their disposal, in accordance with the goals they happen to be pursuing. Banning or blocking these tools will not ensure victory over the terrorists, though that is not to say such methods should be abandoned altogether. The most effective approach is to have agents infiltrate terrorist ranks to ensure constant online and offline monitoring.

The report “Weapons of Mass Distraction” begins with a horrific story broadcast on the Russian state-owned Channel One in 2014, covering how Ukrainian soldiers had supposedly crucified a child before its mother's eyes. The story was later proved to be fake: there was neither a killed child nor a shocked mother. Still, it went viral, reaching a much broader audience on social media than it did on television.

The authors refer to that story as “an example of Kremlin-backed disinformation campaign,” and go on to state that “in subsequent years, similar tactics would again be unleashed by the Kremlin on other foreign adversaries, including the United States during the lead-up to the 2016 presidential election.”

Undoubtedly, the fake story did a lot of damage to the reputation of Channel One and other state-funded media. It is clear why the authors begin with it: it was poorly done, obviously faked, and quickly exposed, yet it showed how effective and powerful social media can be (despite all the reputational risks). The report also makes an important point: “the use of modern-day disinformation does not start and end with Russia. A growing number of states, in the pursuit of geopolitical ends, are leveraging digital tools and social media networks to spread narratives, distortions, and falsehoods to shape public perceptions and undermine trust in the truth.” We are accustomed to research on propaganda and fake news that holds Russia alone responsible for disinformation. This report, by contrast, addresses propaganda and disinformation as a comprehensive problem.

In the introduction, the authors argue that disinformation rests on two major factors: the impact of the technology giants, and the psychology of how people consume information on the Internet. The technology giants have transformed how disinformation and propaganda spread, and the proliferation of social media platforms has left the information ecosystem vulnerable to foreign, state-sponsored actors. “The intent [of bad foreign actors] is to manipulate popular opinion to sway policy or inhibit action by creating division and blurring the truth among the target population.”

Another important aspect of disinformation highlighted in the report is the abuse of fundamental human biases and behaviour. The report states that “people are not rational consumers of information. They seek swift, reassuring answers and messages that give them a sense of identity and belonging.” The claim is backed by research showing that, on average, a false story reaches 1,500 people six times more quickly than a factual account. Indeed, conspiracy stories have become commonplace, and even more widespread during the current pandemic: 5G towers, Bill Gates, and “evil Chinese scientists” who supposedly invented the coronavirus have all become scapegoats, with many more paranoid conspiracy stories spreading online.

What is the solution? The authors do not blame any one country, the tech giants, or human behaviour. On the contrary, they suggest that the response must be comprehensive: “the problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behaviours.”

Define the Problem First

What is the difference between fake news and disinformation? How does disinformation differ from misinformation? It is rare for a report to dedicate a whole chapter to terminology, and “The Weapons of Mass Distraction” provides readers with a solid theoretical grounding. The authors admit that there are many definitions and that it is difficult to set exact parameters for disinformation. They note, however, that “misinformation is generally understood as the inadvertent sharing of false information that is not intended to cause harm, just as disinformation is widely defined as the purposeful dissemination of false information.”

Psychological Factors

As mentioned at the beginning, the authors do not attach labels or focus on one side of the problem. A considerable part of the report is dedicated to the psychological factors of disinformation. The section helps readers understand the behavioural patterns of how humans consume information, why it is easy to fall for a conspiracy theory, and how to use this knowledge to prevent the spread of disinformation.

The findings are surprising. Several cognitive biases allow disinformation to flourish, and the bad news is that there is little we can do about them.

First of all, confirmation bias and selective exposure lead people to prefer information that confirms their preexisting beliefs and to find such information more persuasive. Moreover, confirmation bias and selective exposure work together with naïve realism, which “leads individuals to believe that their perception of reality is the only accurate view and that those who disagree are simply uninformed or irrational.”

In practice, these cognitive biases are widely exploited by tech giants. That does not mean there is a conspiracy behind it; it means it is easy for big tech companies to sell their products using so-called “filter bubbles.” Such a bubble is an algorithm that selectively guesses what information a user would like to see based on data about the user, such as location, past click behaviour, and search history. Filter bubbles work especially well on websites like YouTube: a Wall Street Journal investigation found that YouTube's recommendations often lead users to channels featuring conspiracy theories, partisan viewpoints, and misleading videos, even when those users haven't shown interest in such content.
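The filter-bubble mechanism can be caricatured in a few lines of Python. All data and the scoring rule here are hypothetical; real recommenders use far richer signals, but the self-reinforcing loop is the same:

```python
from collections import Counter

def recommend(candidates, click_history, k=2):
    """Naive sketch of a 'filter bubble' ranker: score each candidate
    by how often its topic appears in the user's click history, so
    past behaviour keeps reinforcing what is shown next."""
    topic_weight = Counter(item["topic"] for item in click_history)
    ranked = sorted(candidates,
                    key=lambda item: topic_weight[item["topic"]],
                    reverse=True)
    return ranked[:k]

# Hypothetical user who has mostly clicked on conspiracy content.
history = [{"topic": "conspiracy"}, {"topic": "conspiracy"}, {"topic": "sports"}]
candidates = [
    {"title": "Moon landing 'questions'", "topic": "conspiracy"},
    {"title": "Local election results",  "topic": "news"},
    {"title": "Match highlights",        "topic": "sports"},
]
top = recommend(candidates, history)
assert top[0]["topic"] == "conspiracy"  # past clicks dominate the feed
```

Each click on a recommended item feeds back into `click_history`, which is exactly the loop that narrows what the user sees over time.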

These days, the most popular ways to counter misinformation are fact-checking and debunking. In the report, the researchers present evidence that the methods we are used to employing may not be that effective: “Their analysis determined that users are more active in sharing unverified rumours than they are in later sharing that these rumours were either debunked or verified. The veracity of information, therefore, appears to matter little. A related study found that even after individuals were informed that a story had been misrepresented, more than a third still shared the story.”

Another research finding is that “participants who perceived the media and the word “news” negatively were less likely than others to identify a fake headline and less able to distinguish news from opinion or advertising.” There is an obvious reason for this: a lack of trust. Recent research shows the public has low trust in journalists as a source of information about the coronavirus. Additionally, according to the American Press Institute, only 43 per cent of people said they could easily distinguish factual news from opinion in online-only news or social media. Thus, the majority of people can hardly tell news from opinion at a time when trust in journalism is at its historical minimum, and it is no surprise that they perceive news so negatively.

This has implications for news validation, which the report says can differ from country to country: “Tagging social media posts as ‘verified’ may work well in environments where trust in news media is relatively high (such as Spain or Germany), but this approach may be counterproductive in countries where trust in news media is much lower (like Greece).”

The broad research base also yields the following essential findings. First, increasing online communities' exposure to different viewpoints is counterproductive: the research presented in the report found that conservatives become more conservative and liberals more liberal.

Second, the phenomenon called belief perseverance, which is the inability of people to change their minds even after being shown new information, means that facts can matter little in the face of strong social and emotional dynamics.

Third, developing critical thinking skills and increasing media literacy may also be counterproductive or of minimal use. Research shows that “many consumers of disinformation already perceive themselves as critical thinkers who are challenging the status quo.” Even debunking false messages may not be effective: showing corrective information did not always reduce participants' belief in misinformation, and when consumers of fake news “were presented with a fact-check, they almost never read it.”

What can be done here? The authors provide the reader with a roadmap for countering misleading information, though the roadmap, which is also grounded in research, may have very limited use, according to the report.

The main idea is to be proactive. While debunking false messages, developing critical thinking, and other such tools have minimal potential, some psychological interventions can help build resilience against disinformation. The authors liken disinformation and misinformation to a disease and propose a kind of vaccine that builds resilience to the virus: people should be warned “that they may be exposed to information that challenges their beliefs, before presenting a weakened example of the (mis)information and refuting it.”

Another aspect of the roadmap is showing different perspectives, “which allows people to understand and overcome the cognitive biases that may render them adversarial toward opposing ideas.” According to the authors, this approach should focus less on the content of one’s thoughts and more on their structure. The fact that certain factors can make humans susceptible to disinformation can also be used as part of the solution.

What About the Tech Giants?

The authors argue that social media platforms should play a central role in neutralizing online disinformation. Yet although the tech giants have demonstrated a willingness to address disinformation, their incentives do not always favour limiting it; indeed, their business model aligns with spreading more of it. “Users are more likely to click on or share sensational and inaccurate content; increasing clicks and shares translates into greater advertising revenue. The short-term incentives, therefore, are for the platforms to increase, rather than decrease, the amount of disinformation their users see.”

The technological section of the report is split into three parts dedicated to three tech companies — Facebook, Twitter and Google. While the report focuses on what companies have already done to counter disinformation, we will highlight only the recommendations and challenges that still remain.

Despite all the measures Facebook has implemented in recent years, the platform remains vulnerable to disinformation. The main vulnerability lies in its messaging apps: WhatsApp was a major conduit of disinformation during the Rohingya crisis in 2018 and during the Brazilian presidential elections the same year. A second vulnerability lies in third-party fact-checking services staffed by human operators, who struggle to handle the volume of content: “fake news can easily go viral in the time between its creation and when fact-checkers are able to manually dispute the content and adjust its news feed ranking.”

Despite its own vulnerabilities, including a colossal bot network, Twitter has become more effective at countering the threat using technologies such as AI, and now follows best practices, according to the report. The question of how proactive the company will be in countering the threat remains open.

With its video-sharing platform YouTube and its ad platform, Google might be the most vulnerable of the three. YouTube, with its personalized recommendation algorithm (filter bubbles), has faced strong criticism for reinforcing viewers' belief that a given conspiracy is, in fact, real. In 2019, however, YouTube announced that it would adjust its algorithms to reduce recommendations of misleading content.

It is not just the tech giants who should take responsibility for disinformation, however. According to the report, it is states that should bear the ultimate responsibility for “defending their nations against this kind of disinformation.” Yet since the platforms remain in private hands, what can governments actually do?

For one, they could play a more significant role in regulating social media companies. According to the report, this does not mean total control of those companies; the authors admit that such regulation risks restricting freedom of speech and sliding into outright censorship, and there is no easy, straightforward way to resolve this tension.

What can we do about it? According to the report, technology will change, but the problem will not be solved within the next decade; we should learn how to live with disinformation. At the same time, public policies should focus on mitigating its most disastrous consequences while maintaining civil liberties, freedom of expression, and privacy.

The report offers a balanced approach to the problem. While other research projects pin labels on countries or technologies, the authors of “Weapons of Mass Distraction” admit the solution will not be easy: it is a complex problem that will require a complex response.


Engaging with Local Stakeholders to Improve Maritime Security and Governance

Illicit activity in the maritime domain takes place within a complex cultural, physical, and political environment. When dialogue is initiated with a diverse range of stakeholders, policy recommendations can take into account region-specific limitations and opportunities. As noted in the Stable Seas: Sulu and Celebes Seas maritime security report, sectors like fisheries, coastal welfare, and maritime security are intrinsically linked, making engagement with a diverse range of local stakeholders a necessity. This collaborative approach is essential to devising efficient and sustainable solutions to maritime challenges. Engagement with local stakeholders helps policymakers discover where in these self-reinforcing cycles additional legislation or enforcement would have the greatest positive impact. Political restrictions against pursuing foreign fishing trawlers in Bangladesh, for example, have allowed the trawlers to target recovering populations of hilsa while local artisanal fishers suffer. In the context of the Philippines, the Stable Seas program and the Asia Pacific Pathways to Progress Foundation recently conducted a workshop that highlighted the importance of consistent stakeholder engagement, resulting in a policy brief entitled A Pathway to Policy Change: Improving Philippine Fisheries, Blue Economy, and Maritime Law Enforcement in the Sulu and Celebes Seas.

Physical Environment

Consistent communication with local stakeholders on regional anomalies allows policymakers to modify initiatives to adjust for the physical, cultural, and political context of a maritime issue. The physical environment affects how, where, and why illicit actors operate in the maritime domain. Knowledge held by local stakeholders about uninhabited coastlines, local currents, and the locations of important coastal communities helps policymakers find recognizable patterns in the locations and frequency of maritime incidents. The 36,289 km of coastline in the Philippine archipelago means that almost 60 percent of the country's municipalities and cities border the sea. The extensive coastline and high levels of maritime traffic make monitoring coastal waters and achieving maritime domain awareness difficult for maritime law enforcement agencies. A Pathway to Policy Change outlines several recommendations by regional experts on ways to improve maritime domain awareness despite limitations imposed by a complex physical environment. The experts deemed collaboration with local government and land-based authorities an important part of addressing the problem. By engaging with stakeholders working in close proximity to maritime areas, policymakers can take into account their detailed knowledge of local environmental factors when determining the method and motive behind illicit activity.

Cultural Environment

Culture shapes how governments respond to non-traditional maritime threats. Competition and rivalry between maritime law enforcement agencies can occur within government structures. A clearer understanding of cultural pressures exerted on community members can help policymakers develop the correct response. Strong ties have been identified between ethnic groups and insurgency recruiting grounds in Mindanao. The Tausug, for instance, tend to fight for the MNLF, while the MILF mostly recruits from the Maguindanaons and the Maranao. Without guidance from local stakeholders familiar with cultural norms, such correlations could go unnoticed, or the motivations for joining insurgency movements could be misconstrued as based solely on extremist or separatist ideology. Local stakeholders can offer alternative explanations for behavioral patterns that policymakers need to accommodate.

Political Environment

Local stakeholder engagement allows policymakers to work on initiatives that can accommodate limitations imposed by the political environment. Collaboration with local stakeholders can provide information on what government resources, in terms of manpower, capital, and equipment, are available for use. Stakeholders also provide important insights into complex political frameworks that can make straightforward policy implementation difficult. Understanding where resource competition and overlapping jurisdiction exist enables policymakers to formulate more effective initiatives. Despite strong legislation regulating IUU fishing in the Philippines, local stakeholders have pointed out that overlapping jurisdictions have created exploitable gaps in law enforcement. In A Pathway to Policy Change, local experts suggested that the government should issue an executive order to unify mandates in the fisheries sector to address the issue. Similarly, the Bangsamoro Autonomous Region of Muslim Mindanao (BARMM) is highlighted as a region that heavily influences maritime security in the Sulu and Celebes seas. Working with government officials to understand how policy initiatives need to adjust for the region's semi-autonomous status ensures maritime issues are properly addressed. BARMM, for instance, issues fishing permits for its own waters in addition to government permits, which can cause inconsistencies. Working alongside local stakeholders allows policymakers to create initiatives that take into account special circumstances within the political system.

Private Sector Engagement

Extending engagement with local stakeholders to the private sector is particularly important during both the policy research and implementation processes. Encouraging private stakeholders to actively help counter illicit activity can help policymakers create a more sustainable and efficient solution to security threats. As A Pathway to Policy Change highlights, private companies already have a strong incentive from a business perspective to involve themselves in environmental and social issues. Governments can encourage further involvement of private stakeholders like blue economy businesses and fishers by offering tax breaks and financial compensation for using sustainable business practices and for helping law enforcement agencies gather information on illicit activity. Offering financial rewards to members of the Bantay Dagat program in the Philippines, for example, would encourage more fishers to participate. Governments can also expand educational programs to raise awareness of issues threatening local economic stability. By communicating consistently with local stakeholders, policymakers can both more accurately identify maritime security needs and more comprehensively address them.

Conclusion

The unique physical, cultural, and political context in which maritime issues take place makes the knowledge of local stakeholders an invaluable asset. While many important types of information can be collected without working closely with stakeholders, there are also innumerable important aspects of any given context that cannot be quantified and analyzed from afar. Engagement with stakeholders provides a nuanced understanding of the more localized and ephemeral factors that affect regional maritime security. Engaging with local stakeholders allows policymakers to capitalize on opportunities and circumvent limitations created by the political, cultural, and physical environment surrounding maritime issues in order to create sustainable, long-term solutions.

Related

Turkey Faced With Revolt Among Its Syrian Proxies Over Libyan Incursion

Relations between Turkey and Syrian armed groups, once considered cordial due to the massive support the Turkish authorities provided to the Syrian opposition, are rapidly deteriorating over Turkey's incursion into the Libyan conflict, according to sources among the Syrian militants fighting in Libya.

Last month, over 2,000 fighters defected from the Sultan Murad Division, one of the key armed factions serving Turkish interests in Syria. The group's members chose to quit after they were ordered to go to Libya to fight on the side of the Turkey-backed Government of National Accord (GNA). This marks a drastic shift in the attitude of Syrian fighters towards participation in the Libyan conflict: just a few months ago there was no shortage of mercenaries willing to fly to Libya via Turkey for lucrative compensation of $2,000–5,000 and a promise of Turkish citizenship offered by Ankara.

Both promises turned out to be an exaggeration, if not an outright lie. The militants who traveled to Libya received neither the money nor the citizenship and other perks promised to them, revealed Zein Ahmad, a fighter from the Ahrar al-Sharqiya faction. Moreover, he pointed out that after arriving in Libya the fighters were immediately dispatched to Tripoli, an arena of regular clashes between GNA forces and units of the Libyan National Army (LNA), despite Turkish promises that they would be tasked with maintaining security at oil facilities.

Data gathered by the Syrian Observatory for Human Rights shows that around 9,000 members of Turkey-backed Syrian armed factions are currently fighting in Libya, while another 3,500 men are undergoing training in Syria and Turkey in preparation for departure. Among them are former members of terror groups such as Hayat Tahrir al-Sham, the Al-Qaeda affiliate in Syria, as confirmed by reports of the capture of a 23-year-old HTS fighter, Ibrahim Muhammad Darwish, by LNA forces. Another example is an ISIS terrorist, also captured by the LNA, who confessed that he was flown in from Syria via Turkey.

By sending Syrian fighters to Libya, Ankara intended to recycle and repurpose these groups to establish its influence without the risks and consequences of a large-scale military operation, with its major expenses and casualties among Turkish military personnel. However, recent developments on the ground show that this goal was not fully achieved.

The Syrian fighters sustain heavy casualties due to their lack of training and weaponry. Total losses among the Turkey-backed groups have reached the hundreds and continue to grow as the GNA and LNA clash with intermittent success. Unless Turkey's President Recep Erdogan curbs his ambitions, the destructive involvement of Syrian armed groups in Libya may result in the collapse of Turkey's influence over the Syrian opposition.