In total I’ve written five methods for sandboxing code. These are certainly not the only methods but they’re mostly simple to use, and they’re what I’ve personally used.

A large part of this sandboxing was only possible because I built the code to work this way. I split everything into privileged and unprivileged groups, and I determined my attack surface. By moving the sandboxing after the privileged code and before the attack surface I minimized risk of exploitation. Considering security before you write any code will make a very big difference.

One caveat here is that SyslogParse can no longer write files. What if, after creating rules for iptables and apparmor, I want to write them to files? It seems like I have to undo all of my sandboxing. But I don’t – there is a simple way to do this. All I need is to have SyslogParse spawned by another privileged process, and have that process get the output from SyslogParse, validate it, and then write that to a file.

One benefit of this “broker” process architecture is that you can actually move all of the privileged code out of SyslogParse. You can launch it as another user, in a chroot environment, and pass it a file descriptor or buffer from the privileged parent.

The downside is that the parent must remain root the entire time, and flaws in the parent could lead to it being exploited – though attacks like this should be difficult, as the broker code would be very small.

Hopefully others can read these articles and apply the techniques to their own programs. If you build a program with what I’ve written in mind, it’s very easy to write sandboxed software, especially with a broker architecture. You’ll make an attacker miserable if you can make use of all of this – their only real course of action is to attack the kernel, and thanks to seccomp you’ve made that a pain too.

Before you write your next project, think about how you can lock it down before you start writing code.

If you have anything to add to what I’ve written – suggestions, corrections, random thoughts – I’d be happy to read comments about it and update the articles.

This is the fifth installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing sandboxing with Apparmor.

Mandatory Access Control:

Mandatory Access Control (MAC), like Discretionary Access Control (DAC), is meant to define permissions for a program. Users and groups are DAC. But what if you want to confine a program that runs with full root? As discussed, root with full capabilities is quite dangerous – and in the case of SyslogParse quite a few of those capabilities are necessary.

Apparmor is a form of Mandatory Access Control implemented through the Linux Security Module hooks in the Linux kernel. MAC is “administrator” defined policy, and can confine even root applications.

Apparmor is a *bit* out of scope for this series, as it doesn’t actually involve any code, but it’s still relevant.

The Code:

While Apparmor itself doesn’t involve any code in SyslogParse, here’s the profile for the program.

Apparmor is incredibly straightforward. There is a path, and then there are one or more letters. These letters stand for certain permissions.

r = read
m = map
w = write

All of this is pretty straightforward. SyslogParse gets the number of CPU cores from /sys/devices/system/cpu/online, so it needs “r” access.

It needs to read some libraries in order to function.
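Since the profile itself isn’t reproduced here, the following is a hedged reconstruction of roughly what it would look like, based only on the description above. The binary path is a guess, and the library paths will vary by distribution:

```
# Hypothetical reconstruction of the SyslogParse profile
/usr/bin/syslogparse {
  # CPU core count
  /sys/devices/system/cpu/online r,

  # shared libraries need read and map access
  /lib/** rm,
  /usr/lib/** rm,

  # still missing, per the note below: capability sys_chroot,
  # capability setuid, capability setgid, and possibly more files
}
```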

And that’s it. Sort of… apparmor on my system is, unfortunately, quite broken. The tools for switching profiles between enforce and complain mode crash on me (I have a lot of weird profiles that I experiment with), which is actually why I started building SyslogParse. So this profile is a bit incomplete. It still needs some capabilities defined for chroot, setuid/setgid, and possibly more file access.

Conclusion:

When enabled, an Apparmor profile begins enforcing policy as soon as the process starts. That means that, even when running as root, an attacker is always confined to the files defined in the profile. Apparmor is quite powerful, and combined with the other sandboxing techniques it’s a very nice reinforcement – writing to the chroot, for example, is now denied throughout the process’s lifetime by both DAC and MAC.

This is the fourth installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing sandboxing a program using Limited Users.

Users and Groups:

Linux Discretionary Access Control works by separating and grouping applications into ‘users’ and ‘groups’. A process in user A is, in terms of DAC, isolated from a process in user B.

There’s also user 0, the root user, which is a privileged user account.

Only a program with root, or with CAP_SETUID/CAP_SETGID, can manipulate its own UID/GID. In the case of SyslogParse, we have root, and we definitely want to lose it as soon as we can.

So, after getting the file handles we need, here’s the code for dropping to a limited user account (if you’ve read the previous articles, this happens right after the chroot).

if (setgid(65534) != 0)
err(1, "setgid failed.");

setgid(65534) sets the GID to 65534. This is the “nobody” group on my system. Nobody is an unprivileged user/group often used by programs wanting to drop privileges. If 65534 doesn’t exist, all the better – dropping to a GID that doesn’t exist is great.

if (setuid(65534) != 0)
err(1, "setuid failed.");

setuid(65534) changes the UID to 65534, which, as above, is the nobody user. Same as before: if the user doesn’t exist, that’s dandy.

Conclusion:

Dropping privileges is a hugely beneficial thing to do. By separating the code into “privileged stuff done all at once, then never again” you can drop privileges before doing anything dangerous, and there goes an attacker’s ability to escalate.

Dropping root privileges is incredibly important. The attack surface and amount of post-exploitation work an attacker can do shrinks drastically.

In the case of SyslogParse, since any attack would be for local escalation (it does no networking), an attacker coming from a typical compromised process would probably lose privileges by exploiting it. At this stage they are in a chroot with no read or write access, running as an unprivileged user and group with no capabilities; they have access to 22 system calls, with some very-nice-to-have calls such as read() denied, and their only chance of getting a few capabilities is to exploit the few lines of code that involve opening a file.

I was going to have the next section be on rlimit, but it’s really not that important, and it also isn’t viable unless you’ve built the application from the bottom up to never write to a file, which will typically involve a brokered architecture.

This is the third installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing Chroot sandboxing.

Intro To Chroot:

If you’ve been on Linux for a little while you may have already heard of a chroot. Maybe some service you use “chroots” itself. You may have also heard that chroot’ing isn’t great for security, or maybe even that chroots are super easy to break out of. Whoever said that isn’t wrong: chroot environments can be great for confining some things and really awful for confining others.

Chroot is, simply, “change root”. The Linux file system has a root, it’s “/” – everything is an offset from this root node. But with chroot you can tell a process that “/” is actually somewhere else. Now, as far as that process knows, the entire filesystem begins somewhere else.

There are two requirements for a process to be able to break out of a chroot environment:

1) The ability to call chroot() again (this requires root, or CAP_SYS_CHROOT)

2) The ability to write to the chroot environment.

So, as soon as you chroot, your process should drop privileges and lose the CAP_SYS_CHROOT capability. If you can remove write access, all the better.

In the case of SyslogParse I did both.

The Code:
mkdir("/tmp/syslogparse/", 0400);

chdir("/tmp/syslogparse/");

if (chroot("/tmp/syslogparse/") != 0)
err(1, "chroot failed");

Line by line:

mkdir("/tmp/syslogparse/", 0400);

mkdir() is a system call that makes a directory at the specified path, with the specified permissions.

In this case the code is creating the directory /tmp/syslogparse/ with mode 0400 (note the octal): only root can read the folder, and no one can write to it.

chdir("/tmp/syslogparse/");

We move our working directory to the folder we’ve just created.

if (chroot("/tmp/syslogparse/") != 0)
err(1, "chroot failed");

chroot changes the root directory to the folder we’ve just created. Now the program, as far as it knows, is at “/” – the root directory. It only sees an empty file system, none of which it can write to.

The next step here is to drop privileges with setuid() and setgid(), which I’ll be going over in the next article.

Conclusion:

At this point the chroot *can* be broken out of. Nothing is stopping this program from simply changing the mode of the folder to allow writing to it. If, however, you drop privileges (again, see the next article), you’ll be in a chroot that cannot be bypassed by design flaws in the chroot itself.

The benefits of being in a no-write chroot are quite nice. The process can’t write any files, which means it can’t create named pipes to other processes – no communication.

With a Grsecurity kernel (some distros package these particular chroot modifications) there’s a host of other restrictions applied: the chroot acts somewhat like a separate namespace/user, and communicating outside of the chroot in any new way is denied. The process is isolated much more strictly.

It’s a very nice way to sandbox an application, and it’s fairly simple, though not suitable for all applications.

This is the second installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will be discussing Linux Capabilities.

Intro To Linux Capabilities:

On Linux you’re likely familiar with the root user. Root is the ‘admin’ account of the system; it has privileges that other processes don’t. But what you may not have known is that those privileges given to root are actually enumerable and defined. For example, root has the capability CAP_SYS_CHROOT, which is what allows it to call chroot().

Let’s say a program needs root, but only because it calls chroot() at some point. Instead of giving it full root, you can give it only CAP_SYS_CHROOT and drop every other privilege.

So, if your program has to run as root (as mine does), you can actually drop some of your root privileges while maintaining others. How effective is this? Jump down to the conclusion below to see – hint: it ranges from great to awful.

This line is where we state which capabilities we’d like. After the capng_clear() we have none, but the program does need a few.

The first two parameters are effectively saying to add these rules.

The third, fourth, and fifth parameters are the defined capabilities to allow.

The last parameter is a -1, which lets capng know that your list of capabilities is terminated.

capng_apply(CAPNG_SELECT_BOTH);

And now, with this call, the rules are applied. Only these capabilities are given to the program… “only”.
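Since the setup lines themselves aren’t all reproduced here, the following is a reconstruction of the whole three-line sequence using the libcap-ng API. The exact capability list is inferred from the conclusion below and may differ from the original:

```c
#include <cap-ng.h>

capng_clear(CAPNG_SELECT_BOTH);      /* start from an empty capability set */
capng_updatev(CAPNG_ADD,
              CAPNG_EFFECTIVE | CAPNG_PERMITTED,
              CAP_SETUID, CAP_SETGID,
              CAP_SYS_CHROOT, CAP_DAC_READ_SEARCH,
              -1);                   /* -1 terminates the capability list */
capng_apply(CAPNG_SELECT_BOTH);      /* everything not listed is dropped */
```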

Conclusion:
This was really easy to do. Three lines of code and a large number of capabilities are gone. But, what’s left?

CAP_SETUID/CAP_SETGID: Quite dangerous, as they let you interact with processes of other UIDs/GIDs by simply making your UID/GID the same as theirs.

CAP_SYS_CHROOT: Not as scary. You can chroot, and if you retain the ability to chroot you can break back out of one.

CAP_DAC_READ_SEARCH: You can read all files that root can read. Password files, sensitive files, whatever – all yours to read.

So in a lot of ways you’re just dropping from root… to root. Even with everything else dropped, these capabilities are still quite powerful and dangerous; it’s not a very large barrier, and an attacker who gains them still gains quite a lot. But, in the case of SyslogParse, all capabilities are dropped eventually.

The nice thing about capabilities is you can do it as soon as the program starts. After you’ve gotten your file descriptors and all that, you can go ahead and start real sandboxing, then do the actual dangerous stuff. In this case, I had to give a lot of scary permissions. But for someone else, maybe all they need is to bind port 80, and in that case you just give CAP_NET_BIND_SERVICE, drop everything else, and that’s pretty nice.

It honestly feels like a “Well, it’s better than giving it full root” in this case, which is bittersweet. It still pretty much feels like full root. But uh, hey, it’s better than giving it full root.

We absorb Tested and Approved COG-605 Exams. killexams.com gives the most perquisite and most recent IT exam materials which almost accommodate total data references. With the usher of their COG-605 brain dumps, you don't requisite to squander your opening on examining greater fragment of reference books and basically requisite to burn through 10-20 hours to ace their COG-605 actual issues and replies. Furthermore, they appoint you with PDF Version and Software Version exam inquiries and answers. For Software Version materials, Its introduced to give the candidates recreate the IBM COG-605 exam in a genuine domain.

We proffer free supplant. Inside legitimacy length, if COG-605 brain dumps that you absorb bought updated, they will illuminate you with the usher of email to down load best in class model of . if you don't pass your IBM IBM Cognos 10 Controller Developer exam, They will give you replete refund. You requisite to transport the verified imitation of your COG-605 exam record card to us. Subsequent to affirming, they will quick give you replete REFUND.

On the off chance that you set up together for the IBM COG-605 exam the utilization of their experimenting with engine. It is effortless to prevail for total certifications in the principal endeavor. You don't must accommodate to total dumps or any free downpour/rapidshare total stuff. They proffer free demo of each IT Certification Dumps. You can try out the interface, question decent and ease of utilize of their activity appraisals before settling on a election to purchase.



Artificial intelligence is transforming every commerce process. Developers are incorporating AI – in the configuration of abysmal learning, machine learning and kindred technologies – into cloud-native applications and commerce processes through tools that enable them to compose these features as data-driven microservices.

On Thursday, IBM Corp. and SiliconANGLE’s sister market research solid Wikibon held a #Think2019 conference community CrowdChat to debate how enterprises can accomplish the journey to AI in the cloud. The hourlong online session was well-attended and there was vibrant discussion of many issues related to the journey to AI.

Here were the most noteworthy responses from these and other participants to each of the CrowdChat questions:

Q: Rob Thomas, general manager of IBM Analytics, has said there is no AI without an IA or information architecture. How are you modernizing your data estate — the organization of your data assets — to rep ready for an AI and multicloud world?

Katie Schafer: “If anyone is looking to learn more about ICP for Data, exist sure to check-out session #2571, titled: Change the Game: Learn How to Win with AI happening on Wednesday, February 13th at 1:30pmPST in the great Theater on the Data & AI Campus.”

Hemanth Manda: “Having talked to a number of customers and commerce partners, this is an issue everyone is grappling with and they are addressing it through their current platform offering ICP for Data, an integrated data and AI platform for multi-cloud”

Matthias Funke: “I espy this question approach up everywhere. Modernization to gain agility, current insights faster, and absorb more people and commerce application capitalize from it … very often one needs to start at the bottom of the AI ladder and the question: How can I collect total the data I need, and accomplish it accessible to the perquisite people, at the perquisite time? And how can I integrate data assets across different locations and data sources?”

Carlo Appugliese: “We toil with clients on their Data Science Journey and biggest factor to winning with AI is to accomplish sure you account for 3 things … The perquisite skills, the perquisite process/ culture and finally the redress tools.”

Jason Tavoularis: “Of course! AI requires data. if there’s no infrastructure, there can’t exist much data, so you can’t hope the AI to exist very smart.”

Anantha Narasimhan: “Our customers are looking at AI to alleviate drive digital and potentially commerce transformation. At the core of AI are a) People & Culture, b) Process, c) Data … With data present total across the organization, getting a helpful ply on it is the very first step … Collect, Organize and then dissect data. and then Infuse AI models in order to operationalize … ML is a considerable enabler for AI. They requisite to recall that AI can alleviate us win quickly.. or tumble flat quickly. Because if the data is not of helpful quality, the models will hurl up atrocious insights”

Tanmay Sinha: “Quality of AI models is directly proportional to the quality of data used to train the model. Without an information architecture to serve high-quality data, the AI models can exist inconsistent, extraneous or worse biased.”

John Furrier: “I mediate that he’s really nailing the core AI (and ML) angle meta data or information that feeds AI engines is super important. If companies rep this perquisite then ML and AI soar to current heights”

Jameskobielus: “There’s no practical AI without data quality, governance, prep, and training in a high-performance data lake. Modernizing your data estate in the multicloud for AI demands an industrialized DevOps approach that automates much/most of these processes … AI can’t exist smart if data scientists can find the perquisite data to drive feature engineering etc. Likewise, AI models can’t finish their jobs with high self-possession without upfront and ongoing training from fresh operational data … Infusing AI into the commerce requires that an operationalized data science pipeline with a strong real-time/streaming CI/CD workflow.”

David Floyer: “IMO, the future for analytics is real-time results. This means mercurial execution of operational AI/Analytics near the data. It too means low-latency connections between applications wanting to automate processes and the AI/analytics required … For example, if you are wanting to ensure that only employees are entering enterprise premises, there will exist many enterprises with the identical problem, and many solutions to purchase … 1. There are two sources of AI solutions – internal, and external, the timehonored accomplish or buy decision. For products and services owned, it is vital that data is collected about those services in IA. However, there are many technologies it would exist easier to buy.”

Q: How are you increasing workload and consumption flexibility in your analytics systems?

Katie Schafer: “To learn more about how you can build a proper data architecture to help data accessibility, don’t miss The Road to AI—A Journey to Modernize Your Data Architecture session on Wednesday, February 13th at 3:30pmPST on the Data & AI Campus.”

Carlo Appugliese: “In Data Science … The key to success is replete access to total data.. In my experience, this is feverish topic and there is a equilibrium they absorb to play between Security and Innovation….My sustain is replete access to data for your Data Scientist and Data Engineers is censorious to your commerce innovating….Here is sessions at #IBMThink where Experian will Go into detail about their AI journey. https://myibm.ibm.com/events/think/all-sessions/session/6869A …. Here is a blog where I explained a recent AI project working with Experian. https://www.ibmbigdatahub.com/blog/how-data-science-elite-helped-uncover-gold-mine-experian”

Matthias Funke: “This is gold to me. Having a catalog of data assets at my disposal without worrying about where data resides. Avoid or minimize data movement to avoid lag and cost is of tremendous value.”

Madhu Kochar: “As I talk to multiple clients, access to data especially shadowy data is critical. It is too principal that they absorb helpful data virtualization story, acceptation you finish not always to trek your data…. Capability to associate your traditional data with IOT data, actual time streaming data is censorious to drive current analytics insights”

Jennifer Shin: “the notion of being able to access total data sounds fancy a dream I had once… then I woke up and remembered I toil with people and data is mess. The reality is they can absorb total the data in the world, but it’s useless if it’s not accurate or of poverty-stricken quality…. In a competitive market, there will always exist businesses and both internal and external clients who want their data to exist kept private if it provides an advantage. Being able to access the data I requisite when I requisite is more principal than having access to total of it.”

David Floyer: “It is essential to absorb multiple sources of data around key commerce processes, products and services. The quality of AI/Advanced Analytics will exist dictated by the quality of the data sources.”

Q: Does your analytics strategy assume to trek data to analytics or analytics to data? Why?

Anantha Narasimhan: “Definitely analytics coming to data – so faster conclusion can exist taken at source or nearby to it…btw, there is an exciting session on Data Modernization strategy in a Multicloud World – by Madhu Kochar: https://myibm.ibm.com/events/think/all-sessions/session/7235A and virtualization: https://myibm.ibm.com/events/think/all-sessions/session/7223A”

Madhu Kochar: “Data Gravity rules! You bring analytics to data, that is the most optimal…. Especially the world of multi-cloud strategy this is censorious that they champion data where it is, thus technology fancy data virtualization, having governance built in to faith the data drives to trusted AI”

Matthias Funke: “Analytics to data. Any data movement or copying is expensive and leads to total kinds of issues (lineage, quality, latency, higher resource utilization and cost)”

Hemanth Manda: “always trek Analytics to Data .. that’s been their mantra . Data gravity should prescribe your strategy. stirring against the gravity means you would discontinuance up spending a ton of resources / money & is not sustainable”

Carlo Appugliese: “in my opinion, finish your analytics where the data is if you can.. There is no value in stirring lots of data, but there is significant commerce value in doing more analytics with your data. Its total about rate and pace of AI projects.”

Tanmay Sinha: “Data is growing exponentially within an enterprise. stirring becomes an avoidable expense if you can bring analytics to your data!”

Jennifer Shin: “In my experience, companies already collecting data find value in turning data into analytics, whereas companies developing current products or services find more value in using analytics for data. The best #datascience teams needs to find the equilibrium in doing both”

Jameskobielus: “In-situ/in-database analytics is a key foundation fo the expansive data revolution. Data gravity. Now with the edge looming larger as a data source, analytics is stirring closer to those nodes and getting more sophisticated there. Distributed AI.”

David Floyer: “Data in volume is costly to trek & takes a lot of time. Data loses value over time. so, it is usually much cheaper to trek code to data than data to code. This is especially loyal for operational AI/analytics, which should exist moved nearby to data source where possible.…It is enchanting to observe that when AI systems are deployed, 90%+ of the code is in operational AI, rather than ML model development.”

Carlo Appugliese: “One of the biggest I’ve seen is that companies mediate they are behind vs other companies.. What companies requisite to understand is that its a journey and they just requisite to start. most companies are learning and growing in this space…..I recommend, pick the one utilize case, do wee team on the project and start. If it fails, that is normal. just goto next one and for the wins, it will cancel out many failed projects.”

Hemanth Manda: “very puny to exist honest & I mediate this is huge issue given increased and diverse regulations , GDPR being the latest… Hemanth Manda…Here is a session on Data Virtualization @ mediate 2019 that would exist very valuable to attend : https://myibm.ibm.com/events/think/all-sessions/session/7181A “

Matthias Funke: “How principal are helpful policies if their ratification is not automated? abysmal integration across the analytics ‘stack’ can resolve for that”

Madhu Kochar: “Every CDO would want to teach YES to this. requisite ML/AI based solutions to automate these activities, and they in IBM analytics absorb solutions to accomplish this effortless (a arduous problem)”

Jennifer Shin: “there’s always a policy, but the restrictions depend on the purpose associated with how the data is being used. When my #datascience team built models for negotiation purposes, even their internal status reports listed their toil as confidential.”

John Furrier: “Policy driven will exist a very principal portion of a machine driven future. Getting policy down and having machines pattern out current policies on the flee address both on demand AI and actual time AI”

Sarbjeet Johal: “ML and AI are next frontiers in Data Governance Platforms and these models will toil in conjunction with policies! So it’s “policy driven ML enabled” approach which seems most practical with the tools they absorb today!”

Q: Are protection and compliance regimes built into your analytics systems, or bolted on? Why?

Matthias Funke: “I espy it as a never-ending journey. One is never done. There is a legacy to start with, but every moment, current data (sources) may rep added to your current landscape. Fun!”

Tanmay Sinha: “Data privacy regulations are coming whether they fancy them or not. GDPR is already here, CCPA is coming soon. Enterprises, wee and large, absorb to starting thinking about the data being collected and shared.”

Jennifer Shin: “#analytics systems typically absorb several layers of protection and compliance regimes. accessing the platform is at a system even whereas anonymizing data depends on the data set (as well any contracts associated with it)”

David Floyer: “Early days for establishing compliance and protection policies. It will probably requisite a company to absorb a Wall Street Journal disaster to focus minds on this issue!”

Q: How does your organization administer profiling, cleansing and cataloging of data?

Anantha Narasimhan: “this is perhaps the core of organization’s journey to AI or even to a successful Data Lake, Data Science…. there is an excellent session at THINK, hosted by Jay @jaylimburn -https://myibm.ibm.com/events/think/all-sessions/session/6913A …. some organizations refer to this as Data Preparation or Data Curation…. Here’s a helpful session at THINK, in case you are interested: https://myibm.ibm.com/events/think/all-sessions/session/6912A”

Carlo Appugliese: “In area of Data Science, typically they comprise a Data Engineer who toil side by side with Data Scientist and are censorious to buy findings and do into Catalog as well as provide key features needed to modeling phase…. You requisite a combination of a cross frictional team, the perquisite access to data and tools to build your AI foundation…. One the expansive areas they espy in AI is aptitude to complicated what your predictive models are doing and finish you faith them.. Let me interrogate everyone, finish you faith the conclusion made by an AI/ML model?…Model jaundice is something they are very focused on, especially from a dev ops perspective. Understanding this is principal and censorious to your organizations future as you incorporate key decisions using AI. So faith AI but verify :)”

Sarbjeet Johal: “it’s mainly done at LOB even in most of the companies I absorb worked with in advisory capacity. Central tools, policies and procedures requisite to exist built for data governance. I believe the WHAT of data cleansing and cataloging must sojourn with LOB and HOW with IT.”

Hemanth Manda: “as usual, there are multiple solutions too ply this, but ICP for Data is a platform that includes and enforces these capabilities by default .. Learn more @ this mediate session : https://myibm.ibm.com/events/think/all-sessions/session/5478A….here is a 3rd party listing of vendors offering cleansing tools : https://www.analyticsindiamag.com/10-best-data-cleaning-tools-get-data/“

Pouya Fakhari: “An edge computing approach is made for the concept of the data warehouse, while pure cloud computing fundamentally contradicts the concept. It is generally accepted that only edge computing makes sense for systems that collect data on a massive scale thoughts hybrid cloud edge…. E. g. an Edge Computing Device can outsource simple computing tasks to a cloud using a Function-As-A-Service concept. Here, the cloud does not store anything and no backend is set up on it. The cloud only offers computing power for any functions that are transmitted on the fly

Matthias Funke: “Would disagree if you mediate about IoT utilize cases with massive volumes of data points continuously produced. Aggregation and storage can befall at the edge. It’s not just data warehousing though.”

Jennifer Shin: “I absorb yet to espy a organization that has this process streamlined. Most established companies absorb many, many meetings about how data set is going to exist used internally and the logistics around it…. one of the advantages of structure cutting edge tech and creating current data products/services is that this is dealt with further down the line”

David Floyer: “This an principal requirement in the maturing of AI/advanced analytics. Solutions should champion distributed and multi-cloud data, and ideally champion orchestration and optimization of stirring code to data or vice versa.”

John Furrier: “Clean data in —> considerable ML and AI; not cleanly data in –> lots of cleanup. Just teach no to data pollution!!”

Carlo Appugliese: “If you’re looking to build a current Data Science Team?…Here is a blog I do out on how to build a rock star Data Science Team! https://www.ibm.com/blogs/business-analytics/rock-star-ibm-data-science-elite-team/ “

Jennifer Shin: “In my experience, IT and operations teams are very principal when you requisite to substantiate that certain governance is in site within an #analytics system or requisite a current policy to exist do in place… the best resource for deploying models anywhere, securely is a IT or technology team that is knowledgable, experienced and responsive!”

James Kobielus: “The core platform that enables enterprises to deploy models anywhere is a data-science CI/CD toolchain that can serve to any target device, node, hardware, container, and runtime environment. The “securely” requires taut access and integrity controls throughout.”

David Floyer: “End-to-end security from development, deployment, and updating is important, and not yet at total common!”

Q: How are your analytics users using data visualization and low-code evolution tooling?

Katie Schafer: “Here’s a considerable session that will showcase the current capabilities in IBM Cognos Analytics 11.1 and how it uses AI to provides smarter self-service analytics: https://myibm.ibm.com/events/think/all-sessions/session/3651A “

Hemanth Manda: “I tried using Tableau, but gave up after a few days. Nothing beats Cognos especially after the latest improvements in 11.1”

James Kobielus: “Increasingly, analytics developers are using declarative, visual, low-code tooling to program AI/ML, with the tooling leveraging auto-ML to compile models for optimized execution on target platforms…. Analytics commerce users are too using self-service, visual tooling to build predictive and other advanced analytics for conclusion support–eg Cognos…. ML-driven augmented programming, leveraging low-code visual front-ends, is a huge research focus here at Wikibon. espy my report from a year ago: https://wikibon.com/augmented-programming-ml-development/ ”

Jennifer Shin: “I find more teams are using #datavisualization across an organization ranging from creating a realtime dashboard for the c suite to using it as a a tracking tool for day to dat operations.”

Q: What is your organization doing to manage and mitigate jaundice in your models?

Katie Schafer: Here’s a session happening at mediate 2019 that will dive into Detecting and Mitigating jaundice in AI: https://myibm.ibm.com/events/think/all-sessions/session/3449A”

Carlo Appugliese: “What I’ve seen is companies are doing this manually but after the fact and really tumble short.. This is topic needs to exist evaluated in the dawn of your model development. They can really alleviate companies with this using tools.”

Madhu Kochar: “Bias in AI a very feverish topic and critical. There are considerable examples, i will share later on how many societal biases are in their datasets. So they really requisite tools and technology to alleviate on data traceability, explainability”

Jennifer Shin: “All models will absorb jaundice because they live in a world without consummate information, which is why being able to communicate the extent that the jaundice poses a risk is so essential in #AI….The best passage to manage and mitigate jaundice in your model is to understand #statistics, #mathematics, #data, #science, #engineering and people…. algorithms aren’t in and of themselves bias, but it can multiply the jaundice depending on how it is designed… Developing commandeer reporting and monitoring for models and algorithms implemented in productions is essential for limiting bias”

Steve Ardire: “Most people mediate algorithms are objective but in great fragment they’re opinions embedded in code. AI systems are black boxes; the data goes in and the retort comes out without an explanation for the decision. Algorithms that learn are conjectural to become more accurate unless jaundice intrudes and amplifies stereotypes….Current ML models understand what’s explicitly stated, but less helpful at anticipating what’s not said or implied…@DameWendyDBE University of Southampton, Growing role of #AI in their lives is ‘too principal to leave to men’ …Must develop efficient mechanisms in algorithms to filter out biases and build ethics into AI with aptitude to read between the lines or what requires common sense.”

James Kobielus: “Debiasing models starts with debiasing data. Here’s a piece I published on the emerging best practices in this. From eventual year: https://www.informationweek.com/big-data/ai-machine-learning/debiasing-our-statistical-algorithms-down-to-their-roots/a/d-id/1331852

David Floyer: “This is an principal faith issue! If a company is shown not exist absorb addressed this issue, there are ascetic risk of brand damage. E.g., a store with cameras with AI to alleviate employees meet, greet or challenge customers entering the store should exist especially careful!”

Sarbjeet Johal: “always exist training your models! Context injection mechanisms are poverty-stricken with current toolings but they are sensible of this problem, that means, they are on their passage to resolve it!…. you absorb to remove jaundice from data input! Algos aren’t bias, data is! Always champion that in mind!”

Here’s the full transcript of the CrowdChat and the polls. And save this date for the Journey to Cloud CrowdChat, 9 a.m. PST Jan. 30, at https://www.crowdchat.net/think2019.

Image: Marcus Spiske/Unsplash



This is the first installment in a series on various sandboxing techniques that I’ve used in my own code to restrict an application’s capabilities. You can find a shorter overview of these techniques here. This article will discuss seccomp filters.

What is Seccomp? An Introduction:

System calls are your way of asking the kernel to do something for you. You send a message saying “Hey, open a file for me” and it’ll probably do it for you, barring permission errors or some other issue. But, if you can talk to the kernel, you can exploit the kernel. Many vulnerabilities are found in kernel system calls, leading to full root privileges – bypassing sandboxing techniques like SELinux, Apparmor, namespaces, chroots, you name it. So, how do we deal with this without patching the kernel, as a developer? Seccomp filters.

Seccomp is a way for a program to register a set of rules with the kernel. These rules deal with the system calls a program can make, and which parameters it can send with them.

When you create your rules you get a nice overview of your kernel attack surface. Those calls are the ways your attacker can attack the kernel. On top of that, you’ve just reduced the kernel attack surface – if an attacker requires system call A and you’ve only allowed system calls B through D, they can’t attack with system call A.

Another nice benefit is the ability to restrict capabilities. If your program never writes a file, don’t give it access to the write() system call. Now you’ve reduced the kernel attack surface, but you’ve also stopped the program from writing files.

The Code:

Seccomp code is fairly simple to use, though I haven’t found any really good documentation. Here is the seccomp code used in my program, SyslogParse, to restrict its system calls.

This should be fairly simple to understand if you’ve written basically any code. This instantiates the seccomp filter, “ctx”, and then initializes it to kill on rule violations. Simple.

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex), 0);

This line is a rule for the “futex” system call. The first parameter, “ctx”, is our instantiated filter. The second, “SCMP_ACT_ALLOW”, says to allow the call when the rule matches. The third is a macro for the futex() system call, as that’s the call we want to allow through the filter. The last parameter, “0”, is how many argument comparisons we want to attach to this rule.

Simple. So this rule will allow any futex system call regardless of parameters.

I chose futex in this example to demonstrate that seccomp can not protect you from every attack. Despite the heavy amount of sandboxing I’ve done in this program, this filter will do nothing to stop attacks that use the futex system call. Recently, one vulnerability was found that could do just that – a call to futex would lead to control over the kernel. Seccomp just isn’t all powerful, but it’s a big improvement.

Note: I found all of these syscalls by repeatedly running strace on SyslogParse with different parameters. Strace will list all of the system calls as well as their arguments and makes creating rules very easy.

seccomp_load(ctx) will load up the filter and from this point on it is enforced. In this case I’ve wrapped it to ensure that it either loads properly or the program won’t run.

And that’s it. That’s all the code it takes. If the program makes a call to any other system call it crashes with “Bad System Call”.

Seccomp is quite easy to use and is the first thing I’d reach for if you’re considering sandboxing. All sandboxing relies on a strong kernel, but as a developer you can only change your own program, and seccomp is a good way to reduce kernel attack surface and make all other sandboxes more effective.

Linux has something like 200 system calls (can’t find a good source, anyone know a more definitive number?), and SyslogParse has dropped that down to about 22. That’s a nice drop in privileges and attack surface.

I wrote a program recently, SyslogParse, to display apparmor and iptables rules based on violations found in my system log. I did this because my apparmor-utils packages always broke / were quite slow when going through my profiles, and going through iptables rules in syslog was a bit of a pain too.

I decided this would be a fun project to sort of “lock down” against theoretical attacks, and I’d like to blog that experience to demonstrate how to use these different sandboxing mechanisms, as well as how they make the program more secure.

Everything below takes place after the application has been designed from a functional point of view – “what do I need this thing to do?”.

Step One: Threat Modeling

This step was a little less important for SyslogParse, as I was going to secure it regardless of real-world threats, but I’ll explain how I went about threat modeling.

The first thing I did was figure out what permissions SyslogParse needs. I know the application, by design, must read from /var/log/syslog – a file that requires root permissions to read. So I’ll be running it as root in order to do that work.

To make things easier for users who don’t log to syslog, I’ll take in a path parameter, which means someone running this program can specify an arbitrary input file. That is the attack surface – one file being taken in.

An attacker who can control content in that file can potentially escalate to root privileges.

Step Two: Seccomp Mode 2 Filters

I’ve discussed seccomp filters on this blog before, but to give a short recap: seccomp filters are developer-defined rules that dictate which system calls can be made, and do light validation on their parameters.

Seccomp filters are very simple to use, and they’re the first thing I implemented.

Here I declare the seccomp filter.

scmp_filter_ctx ctx;

Here I initialize it to kill the process when rules are violated.

ctx = seccomp_init(SCMP_ACT_KILL);

And here is an example of a seccomp rule being created.

seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(futex), 0);

In the above rule I’ve said to allow the futex system call, a call used when a program uses threads and has to set mutexes. The “0” means I have no additional verification of the system call’s arguments. In an ideal world I’d validate arguments to all of these calls, but it’s not always possible.

In the end I had about 22 calls, 3 of which I validate parameters on.

The thing about seccomp is that there’s no point doing any other sandboxing before setting it up – without it the kernel will always be an easy target, and no matter how many sandboxes I layer on, I can’t change that from within my SyslogParse code – until I use seccomp.

23 calls is quite a lot (though considerably less than it could be), and I chose futex as an example to show that despite limiting the calls, the recent futex requeue exploit would bypass this seccomp sandboxing and all other sandboxing this program uses. There’s only so much we can do from within the context of this program.

What is nice, however, is that I now know my kernel’s attack surface. Barring flaws in the seccomp enforcement, I know how my attacker can interact with the kernel, and that in itself is quite valuable.

Step Three: Chroot

By design SyslogParse must be root, in order to read root files, so that means I’ve got the chroot capability. May as well make use of it.

There’s a misconception that chroots are really poor security boundaries. This isn’t entirely false, but it’s not the whole story.

With one call I can set up a chroot environment that’s not so easy to break out of, at least it won’t be by the end of this article.

mkdir("/tmp/syslogparse/", 0400);

That creates the folder /tmp/syslogparse/ with permissions such that only root can read from it. Right now we’re root, so we can read from it, but that won’t last too much longer (about two more steps).

chroot("/tmp/syslogparse/");

The file system as SyslogParse now knows it is an empty wasteland that only root can read and no one else can write to. A regular user would have no ability to read or write to it, which is nice because Inter-Process Communication (IPC) would require at least write access, and ideally read and write access.

Step Four: RLimit

For SyslogParse this is a bit unnecessary, but I went with it anyways.

setrlimit() is a system call you can make that will irreversibly limit the process in some way. In this case, because I want to limit IPC, and because SyslogParse only ever writes to stdout, which is already open, I’m going to tell it that it can not write to any new files.

struct rlimit rlp;
rlp.rlim_cur = 0;
rlp.rlim_max = 0;
setrlimit(RLIMIT_FSIZE, &rlp);

More literally, I’ve told the system that my process can not write to a file larger than 0 bytes.

Step Five: Dropping Privileges

The last significant step in this sandbox is to lose root. In this case, dropping to user 65534, which, at least on my system, is the ‘nobody’ user. A more ideal situation would have SyslogParse drop to a completely nonexistent user (to avoid sharing a user with another process) but I’m going with this for now.

setgid(65534);
setuid(65534);

That’s all it takes – SyslogParse is now running as the nobody user/group. No more root, and the process is within a chroot environment that it has no permissions to read or write to.

Step Six: Apparmor

I’m on elementary OS, which has apparmor. So, in my makefile I’ve put an ‘mv’ command that puts my profile into the user’s apparmor directory.

The profile allows a few library files, read access to /var/log/ (for arbitrary log files), and, because I threaded the process, read access to /sys/devices/system/cpu/online.

The real benefit of this apparmor profile is that it takes effect before any code runs – the rest of the sandboxing all happens right after I open /var/log/syslog – there is very little code before it, but some, and a compromise at that point will lead to full root control of the process. With the apparmor profile the worst case scenario is that they have access to only what is listed there.

Conclusion

Overall, I think that’s a fairly robust sandbox. It was mostly for fun, but it was all fairly simple to implement.

If an attacker did break into this system, the above would make things a bit annoying, though the obvious path is to simply attack one of the allowed system calls, as I only validate parameters on 3 and there is clearly attack surface still left.

This isn’t bulletproof, and it’s not an excuse not to test your code. I fed SyslogParse garbage files and unexpected input to make sure it failed gracefully or erred out immediately when it came across something it didn’t know how to deal with.

Lots of fun to write, and hopefully others can make use of this to make their programs a little bit stronger.

A long time ago I posted an article entitled Windows XP – Abandon Ship. That was nearly one year ago today. And just a few days ago XP officially stopped getting support and patches from Microsoft.

I’d like to clear up some misconceptions that people still seem to have.

You can not be secure on Windows XP. In truth, it’s been a lost cause for quite some time, but Microsoft has been pretty good at dealing with threats through an active approach. Shatter attacks devastated XP machines due to poor privilege separation, but Microsoft addressed the issue decently with a few patches and by lowering service permissions.

Patches are not coming anymore. Support is gone. Do not expect the next big attack to be swiftly put down.

But what does that mean for you, XP user?

It could mean nothing – attackers may not care. We’ve never had such a widely used piece of software go out of support while so many people are still on it. As far as I know this is unprecedented. Predictions are meaningless – I can not tell you what attackers will do, only what they can do.

So, as always, if you’re using XP or any unsecured system you will be playing a game of chance and not skill. It becomes ‘any attacker who wants to’ as opposed to ‘any attacker who can’ when it comes to getting into your system.

Is that a system you want to rely on?

I’ll also take this time to say that no one should be extending support for XP. Notably, Google Chrome will be continuing to patch XP. To me this is nothing but a false sense of security. Google Chrome relies heavily on its sandbox to protect its users, but any sandbox on Windows is going to rely entirely on a secure operating system. So the sandbox is very clearly not a huge barrier because the unpatched XP kernel and services will be easily leveraged for a full sandbox escape.

No one should be encouraged to use XP now. Take no pride in it- you’re gambling, that’s it.

“But I run EMET! You said EMET is great!”

EMET is awesome. And it’s of little use on XP – while it’s a cute way to push back patch time on supported systems by a little bit, it is by no means a significant barrier when basic memory corruption mitigations are not even supported by the operating system.

“But I run NoScript”

I love NoScript – great piece of software. But what will you do when a kernel vulnerability in text parsing is being used in the wild? You’ll get infected.

I really have very little to say here. XP is not securable. It wasn’t a year ago, and now, more than ever, it isn’t.

I’m not saying you’ll get infected. I’m not saying that every XP machine will be linked to a botnet in a year. I’m saying that you are not secure, and anyone who wants to take advantage of that will not have a hard time.

So for one of my classes I had to perform a full penetration test on a server. It wasn’t particularly difficult but I figured I’d share the report here. I’ve done this twice now for the same class (different setups) and it’s been pretty fun.

This is all purposefully vulnerable stuff. It was script-kiddie stuff to get in, but fun nonetheless. The report is written as if it had been handled by a legitimate team of pentesters.