Automatically driving an application via the UI or API to make manual work faster, less burdensome, and more accurate.

Time to give Alex Trebek your answer. If you say “What is test automation?”, you’re correct. But if you say “What is Robotic Process Automation?”, you’re also correct. Undeniably, test automation and RPA have a lot in common—for better, or for worse.

What is RPA?

While software testing all too often remains the overlooked “second-class citizen” of the application development world, RPA has truly captured the attention (and dollars) of IT leaders—to the point where it has become the fastest-growing market in enterprise software.

At its core, RPA is ultimately the same as software test automation. As the Jeopardy! leadoff suggested, both RPA and software test automation automatically drive an application via the UI or API to make manual work faster, less burdensome, and more accurate.

RPA focuses on automating sequences of tasks in production environments to successfully execute a clearly-defined path through a process so you can complete work faster.

Test automation focuses on automating realistic business processes in test environments to see where an application fails so you can make informed decisions about whether an application is too risky to release.

In other words: with RPA, you use automation to make a process work. With software test automation, you use automation to discover how a process can break.
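The distinction can be sketched in a few lines of code. This is a purely illustrative example (the application, `submit_order`, and its rules are invented): the same automation primitive drives the app in both cases, but the bot's goal is to complete the happy path, while the test's goal is to provoke and observe the failure path.

```python
# Hypothetical stand-in for driving an application via UI or API.
def submit_order(quantity):
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"status": "confirmed", "quantity": quantity}

# RPA intent: drive a clearly-defined path through the process to completion.
def rpa_bot(quantity):
    return submit_order(quantity)["status"]

# Test-automation intent: deliberately probe where the process breaks.
def test_breaks_on_bad_input():
    try:
        submit_order(0)
        return False  # the app accepted bad input: a defect worth knowing about
    except ValueError:
        return True   # the failure mode we wanted to observe
```

Same driver, opposite intent: the bot would treat the `ValueError` as a show-stopper, while the test treats it as the expected, desirable result.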

There are two critical automation capabilities required for both RPA and software test automation: UI automation and API automation. At the same time, there are some core differences that must be addressed to enable either software test automation or RPA to succeed at scale in the enterprise. In terms of software test automation, this includes secure and stateful test data management, test-driven service virtualization, change-impact analysis, and risk-based test case design. For RPA, this involves production-grade execution, enterprise-grade security and access control, and comprehensive event triggers.

It’s easy to be enticed by vendor-induced visions of sophisticated cyborg-like robots taking over previously-human-led processes: faster, better, and cheaper. However, that’s quite far from RPA reality. From a technical perspective, RPA bots are really just sets of automation instructions. These instructions are expressed either as scripts (with hard-coded technical details on how to find various element locators on the page) or through model-based automation technologies (which define automation from the perspective of a business user and store automation details in reusable, rearrangeable automation building blocks). If you’ve ever worked with test automation, these approaches should sound quite familiar.
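The two styles can be contrasted with a small sketch. Everything here is hypothetical (the driver, the locators, and the app are invented for illustration); the point is where the locator details live.

```python
class FakeDriver:
    """Stand-in for a UI automation driver; records which locators get clicked."""
    def __init__(self):
        self.clicks = []

    def click(self, locator):
        self.clicks.append(locator)

# Script style: locator details are hard-coded at every use site, so a UI
# change breaks every script that repeats the old locator.
def script_style_login(driver):
    driver.click("//div[3]/form/input[@id='usr-2019']")
    driver.click("//div[3]/form/button[@class='btn-go']")

# Model style: steps reference named building blocks; locator details live
# in one model, so they can be updated (or rearranged) in a single place.
MODEL = {
    "username field": "//input[@data-test='username']",
    "login button": "//button[@data-test='login']",
}

def model_style_login(driver):
    for block in ("username field", "login button"):
        driver.click(MODEL[block])
```

When the UI changes, the script style requires hunting down every hard-coded locator, while the model style requires one edit to `MODEL`—which is the maintenance difference the rest of this article turns on.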

Moreover, RPA’s strength has proven to be automating short repetitive tasks rather than long-running end-to-end processes. According to Gartner, “the term ‘process’ in the RPA acronym is more accurately discrete ‘task’ automation. Most automations supported by RPA tools last, at most, a couple of seconds. Furthermore, at best, the process support aspect of these products is limited to simplistic workflow.”

Solving age-old automation challenges

Ultimately, it’s the strength of the underlying automation engine that makes or breaks both RPA and software test automation initiatives. Given that script-based automation approaches have failed to meet enterprise test automation objectives over the past 20 years, it’s unreasonable to expect the same script-based approaches to now meet enterprise RPA objectives. Not surprisingly, the script-based approaches that have yielded poor results in the software test automation world continue to fall short in the RPA sphere—and the resilient model-based approaches that enable high levels of enterprise test automation continue to rise to the top for RPA.

Brittle automation—the same core problem that has doomed so many software test automation initiatives—has already emerged as the #1 enemy to RPA success and ROI. As publications such as The Wall Street Journal and Forbes have been reporting, the problem with RPA is that bots break—a lot. RPA users are realizing what testers learned years ago: if your automation can’t adapt to day-to-day changes in interfaces, data sources and formats, underlying business processes, etc., then maintaining the automation is going to rapidly eat into your ROI. Moreover, with RPA, the repercussions of broken automation are much more severe. A test that’s not running is one thing; a real business process that’s not getting completed is another.

Recommendations

The common heritage of software test automation and RPA can be a blessing, not a curse. Old problems don’t have to be new again with RPA. The key to RPA success, the construction of sustainable automation, is a skill that most successful testers already possess.

As with test automation initiatives, the success of RPA initiatives ultimately rests on resiliency. It’s essential to find an automation approach that enables business users to rapidly create and update resilient automation for the organization’s core enterprise technology sets (SAP, Salesforce, ServiceNow, Excel, PDF, custom apps, etc.). RPA bots must be resilient to change, and bot maintenance must be simple and straightforward, which rules out the popular script-based approaches. Otherwise, RPA is simply short-lived automation that creates technical debt—leaving the organization at risk when automation fails to execute the required tasks.

According to a recent Gartner keynote, many organizations are finding that software test automation is a great bridge into RPA initiatives, and “it’s important to utilize test automation assets and test automation teams as you build RPA.” There’s a growing trend of organizations entering into RPA by extending their test automation efforts—and those who have successfully conquered the test automation challenge are especially well-poised for success with RPA.

SD Times news digest: Make with Ada programming competition, ARCore updates, and Element AI raises $151 million for operationalizing AI
Fri, 13 Sep 2019

AdaCore announced the start of its 4th annual “Make with Ada programming competition” with the goal to design and implement an embedded software project in which Ada or SPARK are the primary languages used.

“Many programmers tend to discount Ada and SPARK because of the many myths surrounding Ada, including Ada is too complex to implement and too complex to learn,” said Bill Wong, one of the competition’s judges. “As contestants of this annual programming competition demonstrate, developing embedded software with Ada is a lot easier than you think, and the benefits (cost savings, quality code, fewer bugs, etc.) are many.”

AdaCore was founded in 1994, and since then it has created software development and verification tools for building applications used in railway systems, space systems, commercial avionics, military systems, air traffic management/control, medical devices, and financial services.

The competition runs until January 31, 2020 and offers $8,000 in total prizes. The full details are available here.

ARCore updates

Google announced updates that include the Augmented Faces API coming to iOS, improvements to Cloud Anchors such as persistent Cloud Anchors, and a Call for Collaborators.

The Augmented Faces API offers a high-quality, 468-point 3D mesh that lets users attach effects to their faces without a depth sensor on their smartphone. The Cloud Anchors API lets devices create a 3D feature map from visual data on which anchors can be placed.

Now, more angles across larger areas in the scene can be captured for a more robust 3D feature map when an anchor is created. The visual data used to create the map is deleted and only anchor IDs are shared with other devices to be resolved.

Element AI raises $151 million for operationalizing AI

Element AI announced that it raised about $151 million in Series B funding, which it will use to accelerate the deployment and commercialization of solutions that meet customer needs for the operationalization of AI, while continuing to develop AI products.

“Operationalizing AI is currently the industry’s toughest challenge, and few companies have been successful at taking proofs-of-concepts out of the lab, embedding them strategically in their operations, and delivering actual business impact,” said Element AI CEO Jean-François (JF) Gagné. “We are proud to be working with our new partners, who understand this challenge well, and to leverage each other’s expertise in taking AI solutions to market.”

“Modern applications are effectively integrations of services, data, transactions and processes from a vast array of resources to deliver innovative, new services. But ESBs and other traditional integration approaches have failed to keep pace,” said Sanjiva Weerawarana, founder and leader of the Ballerina project. “The Ballerina programming language is facilitating a major evolutionary leap in the development of cloud native distributed applications that is tearing down the outdated barriers between app development and integration to enable greater agility, performance and resiliency. And it’s accelerating the ESB’s path to extinction in the process.”

Features of the programming language include:

New abstractions of client objects, services, resource functions and listeners. This is important because it brings networking directly into the language and addresses the Fallacies of Distributed Computing.

How I learned to love the rebuild: 5 things to consider when your turn comes
Thu, 12 Sep 2019

There’s a saying about perfectly architected systems: we’ve never heard of them, because those companies never get off the ground. When you first get going, the only thing you should focus on is finding product-market fit. Picking the stack that works right now, fast, is the right choice. Yet, once you reach critical mass, waiting too long to re-architect can torpedo your reliability and ability to grow.

So, how do you know when it’s time to pause and reinvest in your systems? What are the signs that it is time to break things apart? Knowing what this transition looks like, and accepting that every company with velocity will eventually get here, will help you understand that retooling your system is a sign of success, not of failure.

Two years ago at CircleCI, we fundamentally changed the guts of our product, and there’s no way for that to not be terrifying. It took six months to get this in front of the first customer, and another nine to get to GA. Because we work on and promote continuous integration and delivery, major releases scare us even more than your average software company. So swapping out the core of our entire platform was doubly dangerous: there was the inherently risky business of trying to recreate our product, except better, AND we had to work in the dark for months without the guiding light of constant validation.

So how did we know it was time to rebuild? Our infrastructure’s efficiency was headed towards a local maximum. We could see the future of our own needs and our customers’, but we couldn’t see an incremental path towards that future. The global maximum was on a different hill and we needed to make a leap. We racked our brains, searching for a way to gradually update our platform, but we came up short. Eventually, we reached a grim conclusion: we’d have to embark on a treacherous journey — a complete re-platforming of our infrastructure.

Here’s what we learned along the way.

Before finding product-market fit, optimize for speed, not elegance. Pick a toolset and a process that helps you move as fast as possible, with the assumption that at some point you will have to rewrite everything. Don’t waste time scaling before you need to.

Many startups focus on the wrong things before finding product-market fit. They waste time talking about the company they want to build. Instead, focus all of your energy on building the product that will support the company you want to scale in the future. Once you have that, you can figure out how to build a sustainable path around it.

Fundamental architecture changes don’t happen incrementally. The inherently dangerous path we were on forced us to take a hard look at our principles: of all of the benefits of CI/CD, what could we keep? What would we have to toss out? How could we continue to make progress in other areas of the product while rewriting the core?

We could have been stubborn. But instead, we asked ourselves these questions, broke down our values and rebuilt them as we rebuilt CircleCI. We charged into this project because we knew what would be on the other side: better continuous delivery. And we believe in that philosophy so utterly that it made sense to temporarily embrace an opposing worldview. Ultimately, we had to make a giant leap so we could take small steps again, but towards a higher goal.

Customer feedback is key. And we learned from it. There were design flaws and architecture flaws and decisions that just didn’t even make any sense. But that’s exactly what we needed to know and we are so thankful to have customers who dove in and helped us figure it out.

When you are in the pure startup stage, in the very early days, you need to understand whether you are doing something customers actually want and the market will accept, and ignore everything else.

Constrain your scope. Throughout the process, we did our best to keep our focus narrow. We found appropriate seams to make important modifications while minimizing required changes elsewhere. We also exposed the new flow via branch-level configuration, so we could solicit real feedback without interrupting the day-to-day software delivery of our customers.
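The branch-level gating described above can be sketched simply. This is a hypothetical illustration (the config shape and branch names are invented): only branches that opt in are routed through the new flow, so real feedback arrives without disrupting everyone else's day-to-day delivery.

```python
# Hypothetical branch-level configuration: which branches opt in to the
# new platform. Everyone else keeps flowing through the proven legacy path.
NEW_FLOW_BRANCHES = {"experiment/new-platform", "canary"}

def pick_pipeline(branch):
    """Route a build to the new flow only if its branch has opted in."""
    return "new" if branch in NEW_FLOW_BRANCHES else "legacy"
```

The design choice here is that the blast radius of the rewrite is bounded by configuration, not by code forks: widening the rollout is a one-line config change, and rolling back is equally cheap.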

Give your team a clear definition of “done.” Admittedly, it was tempting to continue tinkering with the product forever, caught in a vortex of endless perfectionism. But we knew we needed to get back to our core philosophy: continuously delivering value to our customers. So we committed to this upgraded version as our default platform and have since built exciting, new things on top of it.

At the end of the day, it doesn’t matter how good the team is, or how cool the product is, or how elegant your technology is. If customers don’t want it, and the market rejects it, it doesn’t matter. Once you find product-market fit, you can start worrying about other pieces like your technology stack, organizational structure, and whether you have the right people in the right roles.

Andy Rachleff has a famous quote about this: “When a great team meets a lousy market, market wins. When a lousy team meets a great market, market wins. When a great team meets a great market, something special happens.”

SD Times news digest: ScyllaDB’s open-source Amazon DynamoDB-compatible API, Snyk raises $70 million, and Skytap collaborates with Microsoft
Thu, 12 Sep 2019

ScyllaDB announced the Alternator project, an open-source software that enables application and API-level compatibility between Scylla and Amazon DynamoDB.

This will allow users to migrate to an open-source database that runs on any cloud platform, on premises, on bare metal, in virtual machines or on Kubernetes.

“Alternator gives developers the ability to control the number of replicas and the balance of cost vs. redundancy to suit their applications. They can set and change the replica number per data center, the number of zones and the consistency level on a per-query basis,” ScyllaDB wrote in a post.
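Because Alternator is DynamoDB-compatible at the API level, the usual migration step is simply repointing the client at a Scylla node instead of AWS. The sketch below shows that idea as a plain config helper (the local endpoint is an assumption about your deployment); with boto3, the returned kwargs would be passed as `boto3.resource(**dynamodb_client_kwargs(use_alternator=True))`, since `endpoint_url` is the standard boto3 override.

```python
def dynamodb_client_kwargs(use_alternator=False):
    """Build client settings; the only change for Alternator is the endpoint."""
    kwargs = {"service_name": "dynamodb", "region_name": "us-east-1"}
    if use_alternator:
        # Hypothetical local Alternator node; application code is unchanged.
        kwargs["endpoint_url"] = "http://localhost:8000"
    return kwargs
```

The application's table reads and writes stay exactly as they were; only the endpoint differs, which is what "application and API-level compatibility" buys you.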

Snyk raises $70 million

Snyk announced that it raised $70 million, that it has expanded to 300,000 developers worldwide, and that its customer base grew 200 percent in 2019.

“After an incredibly exciting year, including revenue exceeding 4x growth and the acquisition of DevSecCon, the new funding will be used to fuel the company’s ambitious growth plans that include further product development, expanding global resources and community investment to bring our developer-first security solutions to even more development teams and enterprise organizations,” Snyk wrote in a blog post.

Skytap announces collaboration with Microsoft

Skytap announced its collaboration with Microsoft to bring its purpose-built cloud service for legacy applications, including IBM POWER-based solutions, to Azure. The company said it will preview the service on a new class of Microsoft Azure bare metal this year.

“We recognize that enterprises have many critical systems that were not designed with the cloud in mind. Skytap’s ability to migrate and run these applications natively in Microsoft Azure with minimal changes accelerates cloud adoption,” said Eric Lockard, Microsoft corporate vice president for Azure Dedicated.

The service will provide support for heterogeneous application stacks, including native support for POWER workloads running on AIX, IBM i, and Linux. The full details are available here.

CircleCI and Sumo Logic integration offers real-time analytics for SDLC pipelines

CircleCI and Sumo Logic announced an integration that will allow developers to view analytical data about their CircleCI jobs within the Sumo Logic dashboard.

It will now track and show data such as the number of failed builds over a time period, average run time, and status of jobs within a project, along with the ability to send custom logs from a container.
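The "custom logs from a container" part can be sketched with the common Sumo Logic pattern of POSTing structured lines to a hosted HTTP source. This is an illustrative assumption about the setup (the field names and collector URL are placeholders, not the integration's actual schema):

```python
import json

def build_log_event(job, status, duration_s):
    """Format a custom job metric as a JSON log line for an HTTP collector."""
    return json.dumps({"job": job, "status": status, "duration_s": duration_s})

# Inside the build container you would POST the line to your hosted HTTP
# source URL (obtained when creating the source in Sumo Logic), e.g. with
# urllib.request:
#   req = urllib.request.Request(COLLECTOR_URL, data=event.encode(),
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```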

“The act of building quality software, and shipping it quickly, has become the core engine of value creation in companies across all industries. CircleCI allows teams to rapidly release code they trust by automating the build, test, and delivery process. Together with Sumo Logic, we’re now able to provide customers unprecedented visibility and control over their software development pipelines to better monitor and secure their DevOps pipeline to ensure quality and increase delivery velocity,” said Kunal Jain, senior product manager for CircleCI.

Google Developer Days: Dart 2.5 comes packed with new developer features
Wed, 11 Sep 2019

Google Developer Days kicked off in China this week with new features and updates for the developer community. The stable release of the Dart 2.5 SDK was announced with technical previews of major developer features.

The technical previews included ML Complete, a machine learning-powered code completion capability, and a foreign function interface for calling C code within Dart.

According to the team, code completions will help developers avoid misspellings and explore APIs. “As APIs grow, exploration becomes difficult, as the list of possible completions gets too long to browse through alphabetically. We’ve been working hard over the past year to apply machine learning to the problem,” the team wrote in a blog post.

Since ML Complete is still only in preview, it will not have the performance or polish expected in later builds, the team explained.

The dart:ffi foreign function interface provides better support for calling C code and enables developers to invoke C-based frameworks and components. The interface is also in preview, and has some limitations such as no support for nested structures, inline arrays and packed data.
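dart:ffi examples are written in Dart, but the underlying idea (a managed language binding and calling a C function directly) is language-neutral. As a plainly-labeled analogue, here is Python's ctypes doing the equivalent on a POSIX system: load the C runtime, declare the foreign function's signature, then call it.

```python
import ctypes

# Handle to the C runtime (POSIX: passing None resolves symbols from the
# already-loaded C library).
libc = ctypes.CDLL(None)

# Declare the C signature before calling, just as an FFI binding would:
# size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"dart:ffi")  # a real call into C code
```

The declaration step is the part dart:ffi automates with typedefs; getting it wrong in any FFI corrupts the call, which is why both systems make the signature explicit.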

Dart 2.5 also features improved support for constant expressions.

In addition, the company announced Flutter 1.9. Flutter is an open-source mobile development framework, and version 1.9 is its biggest update to date, the company explained. A major milestone in this release is the “successful integration of Flutter’s web support into the main Flutter repository,” the team explained in a post, “allowing developers to write for mobile, desktop and web with the same codebase.”

Other features of the release include support for macOS Catalina and iOS 13, improved tooling support, new Dart language features and new Material widgets.

Clubhouse launches real-time collaborative editor
Wed, 11 Sep 2019

Project management platform company Clubhouse announced the private beta launch of Clubhouse Write, a real-time collaborative knowledge base tool that focuses on information discovery. The full version is planned to be available later this year.

“We are committed to creating an enjoyable experience for engineering and product teams to plan and build software together. Write empowers modern software teams to focus on planning while not being burdened by the complexity involved with managing and connecting their workflow across multiple tools,” said Kurt Schrader, co-founder and CEO of Clubhouse.

Write interacts with Clubhouse’s product management platform, allowing users to collaborate and comment on a doc in real-time, create retros, strategy docs, agendas and more with teams in one place.

The software competes with commonly used project platforms such as Wrike, Asana and Jira, the last of which uses its Kanban system to optimize for Agile teams.

Clubhouse also uses Kanban boards in addition to features called epics and milestones that show how everyday tasks of a team contribute towards a larger company goal. In addition, it uses iterations to prioritize and keep track of the work a team needs to be focused on, in a specific timeframe.

The company explained that it prioritizes ease of use as a major improvement to project management, and that the platform loads 10 times as fast as other tools. Some of the key features in the new Write platform include:

Connect Stories, Epics, Iterations and more to Docs: Users can reference their work in the Clubhouse project management tool and keep everyone organized and on the same page

From COBOL to Go: Why we must support legacy security training and beyond
Wed, 11 Sep 2019

It seems almost comical that in 2019, we should be talking about working with a computer language that was invented in 1959. There aren’t too many seminars or conventions these days devoted to the art of rethreading classic Singer sewing machines, or swapping out the oil pan on a Chevrolet Parkwood or a Triumph Herald. Most of those aging tools have long since been retired, upgraded to new and more efficient models. Yet over here in technology land, which is supposed to be cutting-edge compared to other industries, we are still working with languages like COBOL, which was released around the same time.

Of course, there are very good reasons for this. The Common Business Oriented Language (COBOL) may be 60 years old, but it was so well constructed that it’s still relevant and in widespread use today. COBOL was created as a relatively simple way, using plain language grouped into specific sentences and syntax, to program back-end systems to perform mathematical and formulaic tasks. Why does it live on today? Put simply, it is very good at its job. In a sense, it has become a part of the computing fabric for many mainframe and core systems in industries as diverse as the financial sector and manufacturing.

There have been incremental updates to COBOL over the years, most notably in 2002 when it was turned into an object-oriented language to make programming new applications a little bit more fluid. But for the most part, COBOL remains today what it was back then: an unsung hero, and a workhorse kind of programming language that works on the back-end to underpin many modern mainframe-level applications.

Unfortunately, there was not much in the way of security considerations when COBOL was first created. For example, many COBOL applications have a password program protecting them, but they are almost never hardened against attacks like brute-force cracking. Couple this with the fact that many modern security tools that monitor network traffic don’t know how to deal with or evaluate functions happening within programs written in business languages like COBOL, and you have a real problem waiting to happen. Quite a few modern breaches have been successful because of a lack of security oversight for systems running classic computer languages. In 2015, the data of over four million US federal employees was exposed when the Office of Personnel Management (OPM) was hacked, with the blame falling to their usage of COBOL, citing an inability to implement modern security measures on such an archaic system.
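The missing hardening is straightforward to describe in any language. The sketch below is an illustrative Python analogue, not code from any COBOL system (names, thresholds, and the plaintext comparison are simplifications; real systems compare salted hashes): throttle repeated failures per account so brute-force guessing stalls.

```python
MAX_ATTEMPTS = 5   # illustrative lockout threshold
failed = {}        # failed-attempt counter per account

def check_login(user, password, real_password):
    """Password check with the brute-force lockout old password programs lack."""
    if failed.get(user, 0) >= MAX_ATTEMPTS:
        return "locked"  # require an out-of-band reset; guessing stops paying off
    if password == real_password:
        failed[user] = 0
        return "ok"
    failed[user] = failed.get(user, 0) + 1
    return "denied"
```

After five wrong guesses the account locks even if the sixth guess is correct, which turns an unbounded offline-style guessing attack into a bounded, detectable one.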

Years ago, security was provided by an army of programmers who knew COBOL and other hot languages of the time. Back in the 1960s, COBOL was like today’s Java or .Net, and those who knew about it were the rockstars of their departments. As of 2019, those folks have likely long since retired, even though the systems they protected have not.

Quite a few of these so-called greybeards were brought back to their organizations as contractors to defend the same mainframes they worked on before. In more than a few places, they existed as a bit of an anomaly: a secretive cabal of aging sorcerers in some back corner of the office, their strange dress (wide ties and three-piece-suits) and oddly polite mannerisms not quite fitting in with all the modern hipsters sporting skinny jeans and man buns. Yet, they were absolutely necessary, because few modern programmers sling code in COBOL and other ancient languages. Sadly, even these final wizard sentinels are fading away, finally giving up the ghost and moving to Boca Raton, and enjoying a true retirement.

As such, there is a dire need for people who understand older languages, and the security vulnerabilities that they contain. Even if younger people don’t know how to write code in classic languages, they should at least understand how they work and their potential vulnerabilities. Because while COBOL development has remained relatively static, the threats leveled against networks have continued to evolve. Trying to use ancient cybersecurity techniques programmed sixty years ago, like the aforementioned COBOL password application, to defend a mainframe against modern attackers is akin to deploying a phalanx of spearmen to fight a platoon of space marines – short of a Hollywood-esque miracle, it’s going to end badly for the dudes with the spears.

That is why we believe in the importance of an advanced training system that covers a wide gamut of programming languages and frameworks. You see, one of the glaring issues with a lot of security training options is that the information is simply too generic, or worse – completely irrelevant in the day-to-day jobs of the developer partaking in it. Spending half a day learning about vulnerabilities that only apply to Java isn’t going to help a COBOL developer fortify their system, and it just perpetuates the idea of ‘security’ as a tick-the-box exercise to be forgotten about once the mandatory course has been completed. I might add that training someone in Java security bugs is not always applicable for a Java Spring developer. Secure coding is simply different in every language, even up to the framework level.

In our mission to empower all developers to become security superheroes, we won’t overlook a valid computer language that is still in use at some of the world’s most targeted and critical facilities. Exploring our platform, you will find modern, hands-on challenges and training relating to COBOL offered alongside some of the most modern programming tools available today, like Google’s Golang. This flexibility ensures that training is relevant to an individual and contextual, mimicking their work environment for maximum engagement and effectiveness. After all, building a robust security culture is paramount in the fight against cyber threats, so training should be practical (and fun, of course!).

We want our industry to get to the stage where it doesn’t matter if security threats are made against systems running aging languages, or against the most modern mobility apps. We want every developer to be armed with the best information about those vulnerabilities, the tools and techniques used by attackers to exploit them and how to stop them cold. We will never surrender or waver in the face of cybersecurity threats.

PS: Think an ancient language escapes susceptibility to SQL injection? Think again. See if you can locate and fix one in COBOL right now.
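The COBOL challenge lives on the platform, but the flaw itself is language-neutral: concatenating user input into embedded SQL. As a plainly-labeled illustration in Python with sqlite3 (the table and data are invented), here is the bug and its fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # BUG: attacker-controlled input becomes part of the SQL text itself,
    # so "x' OR '1'='1" rewrites the query and matches every row.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # FIX: a parameterized query; the driver binds the value, and no input
    # can change the shape of the statement.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The COBOL version of the fix is the same idea: host variables in `EXEC SQL` statements instead of building statement text by string manipulation.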

Apple announces iOS 13 app requirements
Wed, 11 Sep 2019

Apple announced that all apps will need to be updated or built to work with iOS 13 by April 2020. In addition, apps will be required to fit the all-screen designs of Apple’s largest mobile devices and iPads.

“Customers around the world will soon experience the incredible new features of iOS 13. Make sure your apps are faster, more responsive, and more engaging by taking advantage of Dark Mode and advances in ARKit 3, Core ML 3, and Siri. Update your apps and product pages, and submit today,” the company wrote.

Apple suggested building apps with the Xcode 11 GM seed rather than the earlier Xcode 11 beta, because the latter experienced lookup failures when apps ran on iOS 11 or earlier. The new Xcode GM seed includes SDKs for iOS 13, iPadOS, watchOS, tvOS 13 and macOS Catalina.

Before submitting apps for review, Apple explained developers should make sure they are making the most of the product pages with the app’s name, icon, description, screenshots, previews and keywords.

“You can also take this opportunity to update your subtitle and promotional text, and choose to promote any new in-app purchases. If your app supports Dark Mode, consider including at least one screenshot that showcases what the experience looks like for users,” Apple wrote.

The company also encouraged developers to utilize the latest advancements in ARKit 3, Core ML 3 and Siri when building the apps, and to use TestFlight to test apps on a device before submitting them for review.

SD Times news digest: Amazon’s quantum ledger database, InfluxData’s serverless time series PaaS and ArcBlock joins Erlang Ecosystem Foundation
https://sdtimes.com/softwaredev/sd-times-news-digest-amazons-quantum-ledger-database-influxdatas-serverless-time-series-paas-and-arcblock-joins-erlang-ecosystem-foundation/
Wed, 11 Sep 2019 15:32:47 +0000

Amazon announced its new quantum ledger database, a fully managed service that provides a cryptographically verifiable ledger for applications that need a centralized, trusted authority to provide a verifiable record of transactions.

Amazon QLDB uses an immutable transactional log, known as a journal, which tracks each and every application data change and maintains a complete and verifiable history of changes over time. All transactions must comply with atomicity, consistency, isolation, and durability (ACID) guarantees to be logged in the journal, and cannot be deleted or modified, Amazon explained in a post.
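The post doesn’t detail QLDB’s internals, but the core idea behind a cryptographically verifiable, append-only journal can be sketched with a hash chain: each entry commits to its predecessor, so any after-the-fact edit breaks verification. A toy Python illustration of the concept (not Amazon’s implementation):

```python
import hashlib
import json

class Journal:
    """Toy append-only journal: each entry's hash covers the previous
    entry's hash, so history can be verified but not silently edited."""

    def __init__(self):
        self.entries = []  # list of (record_json, chain_hash) tuples

    def append(self, record):
        # Genesis entries chain from a fixed all-zero hash.
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        record_json = json.dumps(record, sort_keys=True)
        chain_hash = hashlib.sha256(
            (prev_hash + record_json).encode()
        ).hexdigest()
        self.entries.append((record_json, chain_hash))
        return chain_hash

    def verify(self):
        # Recompute the whole chain; any modified or deleted entry
        # makes a stored hash disagree with the recomputed one.
        prev_hash = "0" * 64
        for record_json, chain_hash in self.entries:
            expected = hashlib.sha256(
                (prev_hash + record_json).encode()
            ).hexdigest()
            if expected != chain_hash:
                return False
            prev_hash = chain_hash
        return True
```
Appending, say, debit and credit records and then tampering with any earlier entry flips `verify()` from True to False, which is the property the bank-account and supply-chain use cases above rely on.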

“We are excited to see customers streamline their operations and enhance their customer and partner experiences by using Amazon QLDB to do things like keep track of credit and debit transactions across customer bank accounts and reconcile data between supply chain systems to track the complete manufacturing history of a product,” said Shawn Bice, VP of databases for Amazon Web Services.

InfluxData announces serverless time series platform as a service
InfluxData announced the launch of InfluxDB Cloud 2.0, a serverless time series platform that collects, stores, queries, processes and visualizes raw, high-precision, time-stamped data.

It supports a wide range of customer applications such as SLA-related monitoring of metrics for e-commerce sites, real-time monitoring of wind turbines and click-stream analysis of users to help improve the customer experience.

“InfluxDB Cloud 2.0 provides a cost-disruptive, highly customizable time series platform that allows developers and operators the flexibility they need to scale their applications while keeping our brand promise of ‘time to awesome,’” said Evan Kaplan, CEO of InfluxData. “Just point your collection agent at Influx and start working with your data.”
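As a toy illustration of the kind of query such a platform answers — say, averaging wind-turbine readings per minute for monitoring — here is a windowed mean over time-stamped points in plain Python (this is not the InfluxDB API):

```python
from datetime import datetime, timedelta
from statistics import mean

def window_mean(points, window):
    """points: list of (datetime, value) samples; window: timedelta.
    Returns {window_start_datetime: mean of values in that window}."""
    buckets = {}
    for ts, value in points:
        epoch = ts.timestamp()
        # Align each sample to the start of its fixed-size window.
        start = epoch - (epoch % window.total_seconds())
        buckets.setdefault(start, []).append(value)
    return {
        datetime.fromtimestamp(start): mean(values)
        for start, values in sorted(buckets.items())
    }
```
With a reading every 10 seconds, two minutes of samples collapse to two per-minute averages; a real time series database does this kind of downsampling at scale, continuously.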

ArcBlock joins the Erlang Ecosystem Foundation
Blockchain platform ArcBlock announced that it became a founding sponsor of the Erlang Ecosystem Foundation, a non-profit organization dedicated to furthering Erlang, Elixir, LFE, and other technologies based on the BEAM.

ArcBlock built its blockchain platform to provide features such as OTP and Smart Contracts on BEAM, and to be highly-accessible for building, running and using decentralized applications and custom blockchains in the cloud.

“By sponsoring the foundation, we are expanding our support and belief that building our blockchain platform using Erlang and the supporting ecosystem including BEAM is a perfect match,” said Tyr Chen, VP of engineering at ArcBlock.

Security Compass redesigned SD Elements for continuous compliance
Software security company Security Compass announced that it has redesigned its policy-to-execution platform, SD Elements, to support a state of continuous compliance.

According to the company, SD Elements version 5.0 is intended for Agile development teams to manage the security of both the software itself and the deployment and configuration requirements of the server and operating system. In addition, the release integrates with issue tracking tools such as JIRA and Microsoft Azure DevOps, along with CI tools such as Jenkins.