ISLAMABAD: After widely publicising Prime Minister Imran Khan’s ambitious Naya Pakistan Housing Programme (NPHP), the government on Monday revealed that applicants will have to bear 20 per cent of the total cost of their dream house as a down payment.

“Every person has to pay 20pc of the cost of the house. The remaining 80pc will be undertaken by the government,” said ruling Pakistan Tehreek-i-Insaf (PTI) leader Syed Firdous Shamim Naqvi at a joint press conference with Task Force on Housing chairman Zaigham Rizvi and Punjab Minister for Housing, Urban Development and Public Health Engineering Mehmoodur Rashid.

This is the first time the government has revealed that applicants will have to pay 20pc of the total cost as a down payment. No such announcement had been made by any government quarter since the programme was unveiled by the prime minister soon after he took office.

Task force official says remaining 80 per cent will be undertaken by government

“If the cost of the lowest category house/apartment is Rs3 million, the applicant will have to pay Rs600,000 as down payment,” said Mr Naqvi.

A senior official of the housing ministry, who did not want to be named, told Dawn that 20pc of the cost of the house would be borne by the applicants, while the remaining amount would be paid by the banks.

Mr Naqvi said that under the NPHP, 5m houses would be constructed — which means a million houses every year.

“According to a survey, 300,000 to 350,000 houses are being built every year in the country and we have to increase that number to 1m,” he said.

Mr Naqvi said the NPHP was based on the mortgage housing system, under which the owner of the house would return the entire cost in 20 years. He said the government had prepared a comprehensive strategy for accomplishing the goal, adding that the government would provide people their own shelter, helping them get rid of rental expenses.

He said that under the programme, new housing authorities would be established in Punjab, Khyber Pakhtunkhwa and Gilgit-Baltistan. “We know that housing is a provincial subject, but the Centre and the provincial governments will work together for the success of the programme.”

Answering a question about the return of mortgage money, he said housing finance was considered the safest form of loans all over the world.

Zaigham Rizvi said former prime ministers Nawaz Sharif and Yousuf Raza Gilani had also announced housing schemes during their governments, but their level of commitment could be assessed by the fact that they did not convene even a single meeting on the issue.

“On the other hand, Prime Minister Imran Khan has presided over 10 meetings on the housing programme in 60 days,” he added.

Responding to a question, he said the media should not spread the disheartening notion that the 5m housing programme could not succeed, adding that India had already launched such a plan and it was going well. He said houses would be constructed not only in urban centres but also in rural areas.

In reply to a question, he said the government would also give some relaxation in taxes to the applicants so that they could pay the instalment of their houses more easily.

Speaking on the occasion, provincial minister Mehmoodur Rashid said that in the first phase in Punjab, the project would be implemented in Sialkot, Faisalabad, Lodhran, Chiniot, Bahawalnagar and Jhelum districts as well as Muzaffarabad in Azad Jammu and Kashmir. “Prime Minister Imran Khan will inaugurate the first phase of the programme on Jan 1, 2019,” he added.

He said land was available for the project and private developers were taking a lot of interest in the scheme. Mr Rashid said the financial model of the project was currently being prepared.

The National Accountability Bureau (NAB) produced Shahbaz before Accountability Judge Mohammad Bashir as the transit remand extended by the court on Oct 31 expired today.

The anti-corruption watchdog had requested the court for a further extension in the transit remand until Nov 12, while Shahbaz's counsel urged that the remand be extended until Nov 10.

During the hearing, the NAB counsel argued that transit remand did not include physical remand.

Shahbaz told the court that NAB officials had, however, already interrogated him during the previous transit remand and added that this should be mentioned in the court's order sheet.

"It's been one month but I still have not been told where I committed corruption," the PML-N president claimed.

Judge Bashir dismissed this argument, saying Shahbaz should mention this before the relevant court.

The court granted an extension in Shahbaz's transit remand until Nov 10.

Shahbaz's physical remand will expire on Nov 7, but due to the extension in his transit remand, NAB will retain his custody and produce him before the accountability court in Lahore on Nov 10.

In the courtroom, Shahbaz offered his condolences to PML-N leader Abid Sher Ali on his mother's demise. He also met his brother Nawaz Sharif in Courtroom 2, where the latter had appeared for the hearing of Flagship corruption reference against him.

Allegations against Shahbaz

According to a NAB notice sent to the former Punjab chief minister on January 16, 2018, Shahbaz is accused of ordering the cancellation of the Ashiana-i-Iqbal contract awarded to the successful bidder, Chaudhry Latif and Sons, and engineering the award of the contract to Lahore Casa Developers, a proxy group of Paragon City Private Limited, which resulted in a loss of approximately Rs193 million.

He is also accused of directing the Punjab Land Development Company (PLDC) to assign the Ashiana-i-Iqbal project to the Lahore Development Authority (LDA), resulting in the award of contract to Lahore Casa Developers, causing a loss of Rs715m and the ultimate failure of the project.

NAB has also accused Shahbaz of directing the PLDC to award the consultancy services of the Ashiana-i-Iqbal project to Engineering Consultancy Services Punjab (ECSP) for Rs192m, while the actual cost was supposed to be Rs35m as quoted by Nespak.

Box CEO Aaron Levie may have made his millions helping companies move their data to the cloud, but the 33-year-old founder still takes the time to sit down and read a book.

It's up to Levie, the leader of a 1,960-person workforce at a company that has $500 million in annual revenue and is valued at $3.75 billion, to set the tone at Box. So even though Levie is known widely as the funniest CEO in enterprise tech, it's no surprise that his books of choice are actually quite serious.

Speaking on stage at BoxWorks in August, Box's annual user conference, Levie shared two books he believes all the attendees should read and absorb.

"Powerful: Building a Culture of Freedom and Responsibility" by Patty McCord (2018)

McCord, who worked at Netflix from 1998 to 2012, stands against the old style of corporate human resources, which she sees as a waste of time. Instead, according to the book's description on Amazon, she "advocates practicing radical honesty in the workplace, saying goodbye to employees who don't fit the company's emerging needs, and motivating with challenging work, not promises, perks, and bonus plans."

Levie isn't the only person in Silicon Valley to take notice. McCord has gotten a lot of buzz since the book came out in early 2018. Arianna Huffington and Laurene Powell Jobs both endorsed McCord's book, as did Netflix CEO Reed Hastings.

"The Great Game of Business" by Jack Stack (1992)

"The Great Game of Business" first came to Levie's attention because McCord referenced it in "Powerful." It may have come out in 1992, but it continues to be influential today.

In the book, the longtime entrepreneur Jack Stack touts the idea of "open-book management," a style of office culture that loops everyone into the finances of the company so they know how things are going every step of the way. Stack's model of transparency and engagement was inspired by workers on the factory floor at International Harvester, which was going "down the tubes," the book's summary says.

But the book has found its way into the heart of Silicon Valley leaders as well. Stack, the founder and CEO of SRC Holdings Corporation, even managed to create a whole franchise around it, including coaches, classes, and events designed to teach the model.



@nightgerbil - You wrote what you wrote. You set the tone and then you're surprised at the response. First a long and unnecessary screed about hating EVE Online, then a long and labored paragraph about Falcon, followed by another trying to goad an answer. Seriously, go re-read what you wrote and tell me that is reasonable discourse that one uses in expectation of a polite response. And you came here because Gevlon wrote that my picture with Falcon was a threat to him. Are you buying that as well? Put yourself in my shoes, random stranger, with that context, and tell me what you see.
But I guess you aren't the only stooge, so you have me there.
Like I said, I have been critical of Falcon as well, but Gevlon crying because he feels he has been singled out is just nonsense. Falcon has been a jerk repeatedly. Gevlon claiming he's corrupt and tilting the game against him is just his usual complaint about how any game where he doesn't win must be rigged, run by corrupt developers, or populated by morons and slackers. It is somehow never his fault.
And a comment like yours on his site would have simply been deleted rather than replied to.
@LazyE - See above. Gevlon is making bricks without straw here. His edifice of complaining falls over if you blow on it. It depends on a vast conspiracy of developers working against him.

We published this a few weeks ago for early voters but I figured I'd run it again-- with a bonus video at the end-- for Californians going to the polls on Tuesday. First and foremost on our list is the U.S. Senate race, in which we strongly back Kevin de León against fossilized conservative incumbent Dianne Feinstein. We don't usually back the better of two evil candidates-- which is what the Democratic Party usually encourages, particularly on the federal level. But, this year, because of the existential threat from Trump, we are doing just that. Vote for every Democrat and against every Republican-- even for candidates as lacking in anything to recommend them as Andrew Janz and Gil Cisneros. There, I said it! That said, we are genuinely excited about some Democrats, especially Katie Porter, the progressive running in Orange County (CA-45), Ammar Campa-Najjar, the progressive running for Congress in San Diego County (CA-50) and Jovanka Beckles, the progressive running for state Assembly in the East Bay. All of our California faves are on the Blue America thermometer on the right. Prediction: Gavin Newsom will win and any progressive who voted for him will be very, very sorry. For state Superintendent of Schools, there's a really good candidate, Tony Thurmond, and a really bad candidate, the charter school guy, Marshall Tuck. OK, that's the easy stuff. Now the statewide propositions:

L.A. County has Measure W, an excellent idea to fund rainwater capture, cleaning and storage projects in order to grow the county's local water supply. Vote YES. And the city of L.A. has two measures, both worth supporting. Measure B amends the City’s charter to permit Los Angeles to establish a public bank. Vote YES. Measure E sets the City’s primary election on the same date as the State’s primary election. Vote YES.



Medical device developer Hypothermia Devices has raised $10.7 million in a new round of equity financing, according to a recently posted SEC filing. Money in the round came from 51 anonymous sources, with the first sale dated March 21, according to the filing. Los Angeles-based Hypothermia Devices is designing and manufacturing cooling and heating […]

The complaint is aimed at uncovering the full details of Russian money flowing to various Trump projects using so-called anonymous wealth companies.

A human rights organization has asked Dutch prosecutors to open a criminal investigation into multi-billion dollar money laundering schemes that they say were aided by Donald Trump’s lawyer, Rudy Giuliani, and his old law firm.

The complaint describes “one of the biggest fraud cases ever” in which “some of these money flows ultimately ended up in the Netherlands” because “Dutch service providers helped to cover up the money laundering acts.”


“The money laundering network started in Kazakhstan, where a figure of up to USD 10 billion was purportedly embezzled,” the complaint asserts. “This money was subsequently circulated by two Kazakh oligarch families via a worldwide network of shell companies. A number of these companies were established in the Netherlands. The money was subsequently invested in real estate projects in the United States and Europe, after which it was paid out as ‘profits’ via – once again – a network of shell companies.”

Netherlands banks and other firms play a significant role in illicit flows of cash around the world through sophisticated techniques to hide income and corporate profits. Many of these techniques appear to push the envelope on legal tax avoidance. When money laundering is involved these aggressive techniques could cross a line into aiding and abetting criminal tax evasion.

The complaint asserts that a small slice of the missing billions was run through Dutch shell corporations with help from Rudy Giuliani’s old law firm, Bracewell & Giuliani. Until 2016, Giuliani was a partner in the 470-lawyer firm.

The complaint was filed by Avaaz, a global human rights organization in Washington which claims 48 million members. It has issued an open call to prosecutors around the world to investigate “the giant web of corruption” that it says propelled Trump’s rise.

Avaaz says it has approximately 290,000 members in the Netherlands. The complaint was filed Oct. 22 with J.J.M. van Dis-Setz of the Dutch Public Prosecution Service by Barbara van Straaten, a lawyer in Amsterdam.

The complaint filed with the Netherlands Public Prosecution Service in Amsterdam relies on court records from several countries that were dug up by investigative journalists, including James S. Henry, the investigative economics editor of DCReport.

Anonymous wealth companies are shell companies created to hide the identities of their owners. Trump and his family are known to have received vast sums from shell companies and have bragged about how much of it came from people in Russia and other parts of the former Soviet empire. Trump contends the deals were all lawful and says he has no knowledge of any money laundering.

A criminal investigation by Dutch prosecutors could help that country avoid banking sanctions and loss of reputation by showing that Amsterdam enforces its own laws and respects laws on transnational crimes.

Roughly $10 billion was stolen from Kazakhstan, a former Soviet satellite located in Central Asia. The current Kazakh government is in court in Switzerland and elsewhere trying to recover the money and prosecute members of two families it says stole the money and laundered it in the West. Other lawsuits connected to the stolen money are being litigated in London, Paris, New York and Los Angeles.

The $10 billion theft was uncovered by PricewaterhouseCoopers during its 2009 audit of BTA Bank, the largest in Kazakhstan. In addition, there is about $300 million missing from Almaty, the largest city in Kazakhstan.

Court documents identify the suspected thieves as Viktor Khrapunov and Mukhtar Ablyazov, oligarchs whose families are bound not just by extensive business ties, but also by marriage. Khrapunov is the former mayor of Almaty. His son Illyas is married to Ablyazov’s daughter.

“There are strong indications that the revenues of these crimes were probably mixed via a complex money-laundering network, and there was a great deal of mutual overlap” between the companies and people suspected of the crimes, the complaint states.

Both Khrapunov and Ablyazov are fugitives.

Khrapunov, who was tried in absentia in Kazakhstan, has been convicted of corruption.

Ablyazov, who was president of the looted bank, had his worldwide assets with an estimated value of $4.9 billion frozen six years ago by a British High Court.

Trump has done business since 1983 with Russian oligarchs and wealthy former officials and business people in former Soviet satellites, including Kazakhstan, Georgia and Azerbaijan. A number of mobsters – American, Russian and others – live in Trump Tower apartments. The building has long been known to local, federal and international law enforcement as a nest of criminal residences.

In 1987 the Kremlin, when the Soviet Union was still a communist state, provided Trump and his first wife Ivana with a luxury trip to Russia.

While a number of journalistic investigations have looked into Trump’s dealings with oligarchs and their money, they were limited to public records and sources, and public records are often scant. Dutch prosecutors, however, have the power to subpoena banking and other records, forcing their disclosure to prosecutors and, potentially, the public.

Trump is known to have done deals with some of those mentioned in the complaint, including Felix Sater, a violent Long Island felon who was born in Russia. For years Sater traveled extensively with Trump working on deals named in the complaint and handing out his Trump Organization business card. Despite these long ties and both videos and still photos showing the men together, Trump claimed during the presidential campaign that he would not recognize Sater if they were in the same room.

In one deal involving Sater, millions of dollars from the Trump SoHo hotel and apartment tower disappeared into an Icelandic bank that was under the control of a Russian oligarch. That bank was part of a multi-billion-dollar scheme to defraud Dutch and British pension funds. Trump has testified that he was due 18% of profits from the building.

Just three weeks before the 2016 election, a massive expose of Trump’s role in helping Kazakh oligarchs hide their illicit money appeared in The Financial Times, a British business newspaper.

“Dirty Money: Trump and the Kazakh Connection” described “evidence a Trump venture has links to alleged laundering network.”

The newspaper said its investigation found that Trump had “assembled an eclectic collection of backers and collaborators. Some had chequered pasts, with links to organized crime or fraud schemes. But perhaps the biggest risk for Mr. Trump’s complex, often opaque, business empire was that it might be used for a purpose US officials fear is rife in the country’s real estate sector: laundering dirty money.”

Trump’s SoHo project “has multiple ties to an alleged international money laundering network. Title deeds, bank records and correspondence show that a Kazakh family accused of laundering hundreds of millions of stolen dollars bought luxury apartments in a Manhattan tower part-owned by Mr. Trump and embarked on major business ventures with one of the tycoon’s partners,” the Financial Times reported after an extensive investigation.

Trump and his partners, the FT asserted, engaged in condo sales that appear to have violated the Patriot Act, the post-9/11 law that requires banks, developers and others to know who their customers are and the sources of their money.

Investigations to identify anonymous buyers of luxury apartments in New York and Florida to determine the extent of any money laundering were announced in January 2016 by the federal government’s Financial Crimes Enforcement Network (FinCEN).

Three months later the FinCEN director, Jennifer Shasky Calvery, spoke about her years of work on transnational Russian organized crime networks. Much of her work involved Russian crime families “laundering their funds through the U.S. financial system. Often, this involved the suspected purchase of personal residences with criminal proceeds.”

While Giuliani calls himself Trump’s personal lawyer, his role is primarily to spread Trumpian disinformation about Special Prosecutor Mueller’s investigations into Russian collusion and related matters concerning Trump and his 2016 presidential campaign. Because Giuliani appears on so many cable and other television shows, but not in court, MSNBC host Lawrence O’Donnell calls the former New York City mayor “Trump’s TV lawyer.”

Usually, given that the hog farm was there first, that the developer knowingly built the subdivision, and that the homeowners knowingly bought (they might have a claim against the builder and/or the selling agent for failure to disclose in WRITING), then yes, tough luck!
The only trouble is, likely that land is still zoned agricultural and isn't that much of a "tax farm", whereas the housing subdivision is. So what will the local politicians, who are in the pocket of real estate developers, do? If they don't succeed in a zoning change or use restriction (which the hog farm owner must be vigilant for, else after a certain period of time w/o a challenge they're permanent), then they do things like sic the Health Department on the farmer, or even Animal Control, with false allegations of cruelty to the herd. There are numerous stories of abuses like this. It's a wonder more beleaguered property owners like this guy don't "Go Postal" and shoot up a County Commissioner's meeting!





Over the past decade, the Python programming language has exploded in popularity for all types of coding. From web developers to video game designers, from data scientists to in-house tool creators, many have fallen in love with Python. Why? Because Python is easy to learn, easy to use, and very powerful.

Want to learn Python programming? Here are some of the best resources and ways to learn Python online, many of which are entirely free. For optimal results, we recommend that you utilize ALL of these websites, as they each have their own pros and cons.

1. How to Think Like a Computer Scientist

One of the best Python tutorials on the web, the How to Think Like a Computer Scientist interactive web tutorial is great because it not only teaches you how to use the Python programming language, but also how to think like a programmer. If this is the first time you’ve ever touched code, then this site will be an invaluable resource for you.

Keep in mind, however, that learning how to think like a computer scientist will require a complete shift in your mental paradigm. Grasping this shift may be easy for some and difficult for others, but as long as you persevere, it will eventually click. And once you’ve learned how to think like a computer scientist, you’ll be able to learn programming languages other than Python with ease!

2. The Official Python Tutorial

What better place to learn Python than on the official Python website? The creators of the language itself have devised a large and helpful guide that walks you through the language basics.

The best part of this web tutorial is that it moves slowly, drilling specific concepts into your head from multiple angles to make sure you truly understand them before moving on. The website’s formatting is simple and pleasing to the eye, which just makes the whole experience that much easier.

If you have some background in programming, the official Python tutorial may be too slow and boring for you―but if you’re a brand newbie, you’ll likely find it to be an indispensable resource on your journey.

3. A Byte of Python

The A Byte of Python web tutorial series is awesome for those who want to learn Python and have a bit of previous experience with programming. The very first part of the tutorial walks you through the steps necessary to set up a Python interpreter on your computer, which can be a troublesome process for first-timers.

There is one drawback to this website: it does try to dive in a bit too quickly. As someone with Python experience under my belt, I can see how newbies might be intimidated by how quickly the author moves through the language.

But if you can keep up, then A Byte of Python is a fantastic resource. If you can’t? Try some of the other Python tutorial websites in this list first, and once you have a better grasp of the language, come back and try this one again.

4. LearnPython

Unlike the previously listed Python tutorial sites, LearnPython is great because the website itself has a built-in Python interpreter. This means you can play around with Python coding right on the website, eliminating the need for you to muck around and install a Python interpreter on your system first.

Of course, you’ll need to install an interpreter eventually if you plan on getting serious with the language, but LearnPython actually lets you try Python before investing too much time setting up a language that you might end up not using.

LearnPython’s tutorial incorporates the interpreter, which allows you to play around with code in real-time, making changes and experimenting as you learn. The programming exercises at the end of each lesson are helpful, too.

5. Learn X in Y Minutes: Python 3

Let’s say you have plenty of programming experience and you already know how to think like a programmer, but Python is new to you and you just want to get to grips with the actual syntax of the language. In that case, Learn X in Y Minutes is the best website for you.

True to its name, this site lays out all of the syntactic nuances of Python in code format so that you can learn all of the important bits of Python’s syntax in under 15 minutes. It’s succinct enough to suffice as a reference―bookmark the page and come back to it whenever you forget a certain aspect of Python.
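To give a flavor of the kind of whirlwind syntax tour such a page offers, a few of Python's syntactic nuances can be sketched in a handful of lines (this snippet is my own illustration, not taken from the site):

```python
# Integer division vs. true division
7 // 2   # 3
7 / 2    # 3.5

# Slicing works on any sequence
word = "python"
word[1:4]    # "yth"
word[::-1]   # "nohtyp" (reversed)

# List comprehensions replace many explicit loops
squares = [n * n for n in range(5)]   # [0, 1, 4, 9, 16]

# Tuple unpacking swaps values without a temporary variable
a, b = 1, 2
a, b = b, a   # now a == 2 and b == 1
```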

6. CodeWars

CodeWars isn’t so much a tutorial as it is a gamified way to test your programming knowledge. It consists of hundreds of different coding puzzles (called “katas”), which force you to take what you’ve learned from the aforementioned Python websites and apply it to real-life problems.

The katas on CodeWars are categorized by difficulty, and they do have an instructive quality to them, so you’ll definitely learn as you go through each puzzle. As you complete katas, you’ll “level up” and gain access to harder katas. But the best part? You can compare your solutions with solutions submitted by others, which will significantly accelerate your learning.

Though it has a relatively shallow learning curve, Python is a powerful language that can be utilized in multiple applications. Its popularity has grown consistently over the years, and there’s no indication that the language will disappear any time soon.

Still have questions? Check out our answers to the most frequently asked questions about Python programming, where we walk you through everything you need to know about Python as a beginner.

When writing code in Python, it’s important to make sure that your code can be easily understood by others. Giving variables obvious names, defining explicit functions, and organizing your code are all great ways to do this.
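As a quick illustration (the function and values here are invented for the example), compare the same calculation written with a cryptic name and with a descriptive one:

```python
# Hard to skim: one-letter names give the reader no clues
def f(x):
    return sum(x) / len(x)

# Easy to skim: the name and docstring state the intent
def average_temperature(readings):
    """Return the mean of a list of temperature readings."""
    return sum(readings) / len(readings)

print(average_temperature([20.0, 22.5, 21.0]))
```

Both functions behave identically; only the second one tells the next reader what the code is for.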

Another awesome and easy way to increase the readability of your code is by using comments!

In this tutorial, you’ll cover some of the basics of writing comments in Python. You’ll learn how to write comments that are clean and concise, and when you might not need to write any comments at all.

You’ll also learn:
Why it’s so important to comment your code
Best practices for writing comments in Python
Types of comments you might want to avoid
How to practice writing cleaner comments


Why Commenting Your Code Is So Important

Comments are an integral part of any program. They can come in the form of module-level docstrings, or even inline explanations that help shed light on a complex function.
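A minimal sketch of both forms might look like this (the function is invented purely for illustration):

```python
"""Module-level docstring: a one-line summary of what this module provides."""

def normalize(values):
    """Scale a list of numbers so that they sum to 1."""
    total = sum(values)
    # Inline explanation: guard against dividing by zero
    # when the list is empty or contains only zeros.
    if total == 0:
        return [0.0 for _ in values]
    return [value / total for value in values]

print(normalize([1, 1, 2]))  # [0.25, 0.25, 0.5]
```

The docstrings describe *what* the module and function do, while the inline comment explains *why* a non-obvious line is there.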

Before diving into the different types of comments, let’s take a closer look at why commenting your code is so important.

Consider the following two scenarios in which a programmer decided not to comment their code.

When Reading Your Own Code

Client A wants a last-minute deployment for their web service. You’re already on a tight deadline, so you decide to just make it work. All that “extra” stuff―documentation, proper commenting, and so forth―you’ll add that later.

The deadline comes, and you deploy the service, right on time. Whew!

You make a mental note to go back and update the comments, but before you can put it on your to-do list, your boss comes over with a new project that you need to get started on immediately. Within a few days, you’ve completely forgotten that you were supposed to go back and properly comment the code you wrote for Client A.

Fast forward six months, and Client A needs a patch built for that same service to comply with some new requirements. It’s your job to maintain it, since you were the one who built it in the first place. You open up your text editor and…

What did you even write?!

You spend hours parsing through your old code, but you’re completely lost in the mess. You were in such a rush at the time that you didn’t name your variables properly or even set your functions up in the proper control flow. Worst of all, you don’t have any comments in the script to tell you what’s what!

Developers forget what their own code does all the time, especially if it was written a long time ago or under a lot of pressure. When a deadline is fast approaching, and hours in front of the computer have led to bloodshot eyes and cramped hands, that pressure can be reflected in the form of code that is messier than usual.

Once the project is submitted, many developers are simply too tired to go back and comment their code. When it’s time to revisit it later down the line, they can spend hours trying to parse through what they wrote.

Writing comments as you go is a great way to prevent the above scenario from happening. Be nice to Future You!

When Others Are Reading Your Code

Imagine you’re the only developer working on a small Django project. You understand your own code pretty well, so you don’t tend to use comments or any other sort of documentation, and you like it that way. Comments take time to write and maintain, and you just don’t see the point.

The only problem is, by the end of the year your “small Django project” has turned into a “20,000 lines of code” project, and your supervisor is bringing on additional developers to help maintain it.

The new devs work hard to quickly get up to speed, but within the first few days of working together, you’ve realized that they’re having some trouble. You used some quirky variable names and wrote with super terse syntax. The new hires spend a lot of time stepping through your code line by line, trying to figure out how it all works. It takes a few days before they can even help you maintain it!

Using comments throughout your code can help other developers in situations like this one. Comments help other devs skim through your code and gain an understanding of how it all works very quickly. You can help ensure a smooth transition by choosing to comment your code from the outset of a project.

How to Write Comments in Python

Now that you understand why it’s so important to comment your code, let’s go over some basics so you know how to do it properly.

Python Commenting Basics

Comments are for developers. They describe parts of the code where necessary to facilitate the understanding of programmers, including yourself.

To write a comment in Python, simply put the hash mark # before your desired comment:

# This is a comment

Python ignores everything after the hash mark and up to the end of the line. You can insert them anywhere in your code, even inline with other code:

print("This will run.") # This won't run

When you run the above code, you will only see the output This will run. Everything else is ignored.

Comments should be short, sweet, and to the point. While PEP 8 advises keeping code at 79 characters or fewer per line, it suggests a max of 72 characters for inline comments and docstrings. If your comment is approaching or exceeding that length, then you’ll want to spread it out over multiple lines.

Python Multiline Comments

Unfortunately, Python doesn’t have a way to write multiline comments as you can in languages such as C, Java, and Go:

# So you can't
just do this
in python

In the above example, the first line will be ignored by the program, but the other lines will raise a SyntaxError.

In contrast, a language like Java will allow you to spread a comment out over multiple lines quite easily:

/* You can easily
write multiline
comments in Java */

Everything between /* and */ is ignored by the program.

While Python doesn’t have native multiline commenting functionality, you can create multiline comments in Python. There are two simple ways to do so.

The first way is simply by pressing the return key after each line, adding a new hash mark and continuing your comment from there:

def multiline_example():
    # This is a pretty good example
    # of how you can spread comments
    # over multiple lines in Python
    pass

Each line that starts with a hash mark will be ignored by the program.

Another thing you can do is use multiline strings by wrapping your comment inside a set of triple quotes:

"""
If I really hate pressing `enter` and
typing all those hash marks, I could
just do this instead
"""

This is like multiline comments in Java, where everything enclosed in the triple quotes will function as a comment.

While this gives you the multiline functionality, this isn’t technically a comment. It’s a string that’s not assigned to any variable, so it’s not called or referenced by your program. Still, since it’ll be ignored at runtime and won’t appear in the bytecode, it can effectively act as a comment. (You can take a look at this article for proof that these strings won’t show up in the bytecode.)

However, be careful where you place these multiline “comments.” Depending on where they sit in your program, they could turn into docstrings, which are pieces of documentation that are associated with a function or method. If you slip one of these bad boys right after a function definition, then what you intended to be a comment will become associated with that object.
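To make the distinction concrete, here is a small sketch (the function names are invented for illustration): a triple-quoted string that is the first statement of a function becomes its docstring, while the same string placed anywhere else is simply ignored:

```python
def documented():
    """This string is the first statement, so it becomes the docstring."""
    return 42

def not_documented():
    x = 42
    """This string is not the first statement, so it is just ignored."""
    return x

print(documented.__doc__)      # the docstring text above
print(not_documented.__doc__)  # None
```

Only the first string is stored in .__doc__; the second never gets attached to the function object, which is why it can safely stand in for a comment.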

Be careful where you use these, and when in doubt, just put a hash mark on each subsequent line. If you’re interested in learning more about docstrings and how to associate them with modules, classes, and the like, check out our tutorial on Documenting Python Code .

Python Commenting Shortcuts

It can be tedious to type out all those hash marks every time you need to add a comment. So what can you do to speed things up a bit? Here are a few tricks to help you out when commenting.

One of the first things you can do is use multiple cursors. That’s exactly what it sounds like: placing more than one cursor on your screen to accomplish a task. Simply hold down the Ctrl or Cmd key while you left-click, and you should see the blinking lines on your screen.

This is most effective when you need to comment the same thing in several places.

What if you’ve got a long stretch of text that needs to be commented out? Say you don’t want a defined function to run in order to check for a bug. Clicking each and every line to comment it out could take a lot of time! In these cases, you’ll want to toggle comments instead. Simply select the desired code and press Ctrl + / on PC, or Cmd + / on Mac.

All the highlighted text will be prepended with a hash mark and ignored by the program.

If your comments are getting too unwieldy, or the comments in a script you’re reading are really long, then your text editor may give you the option to collapse them using the small down arrow on the left-hand side.

Simply click the arrow to hide the comments. This works best with long comments spread out over multiple lines, or docstrings that take up most of the start of a program.

Combining these tips will make commenting your code quick, easy, and painless!

Python Commenting Best Practices

While it’s good to know how to write comments in Python, it’s just as vital to make sure that your comments are readable and easy to understand.

Take a look at these tips to help you write comments that really support your code.

When Writing Code for Yourself

You can make life easier for yourself by commenting your own code properly. Even if no one else will ever see it, you’ll see it, and that’s enough reason to make it right. You’re a developer after all, so your code should be easy for you to understand as well.

One extremely useful way to use comments for yourself is as an outline for your code. If you’re not sure how your program is going to turn out, then you can use comments as a way to keep track of what’s left to do, or even as a way of tracking the high-level flow of your program. For instance, use comments to outline a function in pseudo-code:

from collections import defaultdict

def get_top_cities(prices):
    top_cities = defaultdict(int)

    # For each price range
    # Get city searches in that price
    # Count num times city was searched
    # Take top 3 cities & add to dict

    return dict(top_cities)

These comments plan out get_top_cities() . Once you know exactly what you want your function to do, you can work on translating that to code.

Using comments like this can help keep everything straight in your head. As you walk through your program, you’ll know what’s left to do in order to have a fully functional script. After “translating” the comments to code, remember to remove any comments that have become redundant so that your code stays crisp and clean.
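As a hedged illustration of that "translation" step, the outlined get_top_cities() might end up looking something like this (the input format, price ranges mapped to lists of searched cities, is an assumption, since the article never shows it):

```python
from collections import Counter, defaultdict

def get_top_cities(prices):
    top_cities = defaultdict(int)

    for price_range, searches in prices.items():
        # Count num times each city was searched in this price range
        counts = Counter(searches)
        # Take top 3 cities & add to dict
        for city, count in counts.most_common(3):
            top_cities[city] += count

    return dict(top_cities)

searches = {
    "$0-$100": ["Boston", "Boston", "Austin"],
    "$100-$200": ["Austin", "Denver"],
}
print(get_top_cities(searches))  # {'Boston': 2, 'Austin': 2, 'Denver': 1}
```

Notice how the pseudo-code comments map almost one-to-one onto the finished lines; any comment that merely restates the code can then be deleted.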

You can also use comments as part of the debugging process. Comment out the old code and see how that affects your output. If you agree with the change, then don’t leave the code commented out in your program, as it decreases readability. Delete it and use version control if you need to bring it back.

Finally, use comments to define tricky parts of your own code. If you put a project down and come back to it months or years later, you’ll spend a lot of time trying to get reacquainted with what you wrote. In case you forget what your own code does, do Future You a favor and mark it down so that it will be easier to get back up to speed later on.

When Writing Code for Others

People like to skim and jump back and forth through text, and reading code is no different. The only time you’ll probably read through code line by line is when it isn’t working and you have to figure out what’s going on.

In most other cases, you’ll take a quick glance at variables and function definitions in order to get the gist. Having comments to explain what’s happening in plain English can really assist a developer in this position.

Be nice to your fellow devs and use comments to help them skim through your code. Inline comments should be used sparingly to clear up bits of code that aren’t obvious on their own. (Of course, your first priority should be to make your code stand on its own, but inline comments can be useful in this regard.)

If you have a complicated method or function whose name isn’t easily understandable, you may want to include a short comment after the def line to shed some light:
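The article's own example is missing here, so here is a hypothetical one: a tersely named helper with a one-line comment right after the def line:

```python
def mget(d, *keys):
    # Return the values of several keys at once, skipping missing ones
    return [d[k] for k in keys if k in d]

print(mget({"a": 1, "b": 2}, "a", "c"))  # [1]
```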

This can help other devs who are skimming your code get a feel for what the function does.

For any public functions, you’ll want to include an associated docstring, whether it’s complicated or not:

def sparsity_ratio(x: np.array) -> float:
    """Return a float

    Percentage of values in array that are zero or NaN
    """

This string will become the .__doc__ attribute of your function and will officially be associated with that specific method. The PEP 257 docstring guidelines will help you to structure your docstring. These are a set of conventions that developers generally use when structuring docstrings.

The PEP 257 guidelines have conventions for multiline docstrings as well. These docstrings appear right at the top of a file and include a high-level overview of the entire script and what it’s supposed to do:

# -*- coding: utf-8 -*-
"""A module-level docstring
Notice the comment above the docstring specifying the encoding.
Docstrings do appear in the bytecode, so you can access this through
the ``__doc__`` attribute. This is also what you'll see if you call
help() on a module or any other Python object.
"""

A module-level docstring like this one will contain any pertinent or need-to-know information for the developer reading it. When writing one, it’s recommended to list out all classes, exceptions, and functions as well as a one-line summary for each.

Python Commenting Worst Practices

Just as there are standards for writing Python comments, there are a few types of comments that don’t lead to Pythonic code. Here are just a few.

Avoid: W.E.T. Comments

Your comments should be D.R.Y. The acronym stands for the programming maxim “Don’t Repeat Yourself.” This means that your code should have little to no redundancy. You don’t need to comment a piece of code that sufficiently explains itself, like this one:

return a # Returns a

We can clearly see that a is returned, so there’s no need to explicitly state this in a comment. This makes comments W.E.T., meaning you “wrote everything twice.” (Or, for the more cynical out there, “wasted everyone’s time.”)

W.E.T. comments can be a simple mistake, especially if you used comments to plan out your code before writing it. But once you’ve got the code running well, be sure to go back and remove comments that have become unnecessary.

Avoid: Smelly Comments

Comments can be a sign of “code smell,” which is anything that indicates there might be a deeper problem with your code. Code smells try to mask the underlying issues of a program, and comments are one way to try and hide those problems. Comments should support your code, not try to explain it away. If your code is poorly written, no amount of commenting is going to fix it.

Let’s take this simple example:

# A dictionary of families who live in each city
mydict = {
    "Midtown": ["Powell", "Brantley", "Young"],
    "Norcross": ["Montgomery"],
    "Ackworth": []
}

def a(dict):
    # For each city
    for p in dict:
        # If there are no families in the city
        if not mydict[p]:
            # Say that there are no families
            print("None.")

This code is quite unruly. There’s a comment before every line explaining what the code does. This script could have been made simpler by assigning obvious names to variables, functions, and collections, like so:
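The cleaned-up version is missing from the text, but it might look roughly like this (the names are my own suggestions):

```python
families_by_city = {
    "Midtown": ["Powell", "Brantley", "Young"],
    "Norcross": ["Montgomery"],
    "Ackworth": [],
}

def cities_without_families(families_by_city):
    return [city for city, families in families_by_city.items() if not families]

print(cities_without_families(families_by_city))  # ['Ackworth']
```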

By using obvious naming conventions, we were able to remove all unnecessary comments and reduce the length of the code as well!

Your comments should rarely be longer than the code they support. If you’re spending too much time explaining what you did, then you need to go back and refactor to make your code more clear and concise.

Avoid: Rude Comments

This is something that’s likely to come up when working on a development team. When several people are all working on the same code, others are going to be going in and reviewing what you’ve written and making changes. From time to time, you might come across someone who dared to write a comment like this one:

# Put this here to fix Ryan's stupid-a** mistake

Honestly, it’s just a good idea not to do this, even if it’s your friend’s code and you’re sure they won’t be offended by it. You never know what might get shipped to production, and how is it going to look if you accidentally left that comment in there and a client discovered it down the road? You’re a professional, and including vulgar words in your comments is not the way to show that.

How to Practice Commenting

The simplest way to start writing more Pythonic comments is just to do it!

Start writing comments for yourself in your own code. Make it a point to include simple comments from now on where necessary. Add some clarity to complex functions, and put a docstring at the top of all your scripts.

Another good way to practice is to go back and review old code that you’ve written. See where anything might not make sense, and clean up the code. If it still needs some extra support, add a quick comment to help clarify the code’s purpose.

This is an especially good idea if your code is up on GitHub and people are forking your repo. Help them get started by guiding them through what you’ve already done.

You can also give back to the community by commenting other people’s code. If you’ve downloaded something from GitHub and had trouble sifting through it, add comments as you come to understand what each piece of code does.

“Sign” your comment with your initials and the date, and then submit your changes as a pull request. If your changes are merged, you could be helping dozens if not hundreds of developers like yourself get a leg up on their next project.

Conclusion

Learning to comment well is a valuable tool. Not only will you learn how to write more clearly and concisely in general, but you’ll no doubt gain a deeper understanding of Python as well.

Knowing how to write comments in Python can make life easier for all developers, including yourself! They can help other devs get up to speed on what your code does, and help you get re-acquainted with old code of your own.

By noticing when you’re using comments to try and support poorly written code, you’ll be able to go back and modify your code to be more robust. Commenting previously written code, whether your own or another developer’s, is a great way to practice writing clean comments in Python.

As you learn more about documenting your code, you can consider moving on to the next level of documentation. Check out our tutorial on Documenting Python Code to take the next step.

Note: if for some reason you are using auth v2 and not v3, you can drop user_domain_name and project_domain_name.

You should be able to use your heat client now. Let’s test it.

List Stacks

for stack in client.stacks.list():
    print(stack)

<Stack {
    u'description': u'',
    u'parent': None,
    u'deletion_time': None,
    u'stack_name': u'default',
    u'stack_user_project_id': u'48babe632349f9b87ac3513',
    u'stack_status_reason': u'Stack CREATE completed successfully',
    u'creation_time': u'2018-10-25T17:02:52Z',
    u'links': [{u'href': u'https://my-server', u'rel': u'self'}],
    u'updated_time': None,
    u'stack_owner': None,
    u'stack_status': u'CREATE_COMPLETE',
    u'id': u'b90d0e57-05a8-4700-b2f9-905497abe673',
    u'tags': None
}>

The list method provides us with a generator that returns Stack objects. Each Stack object contains plenty of information: the name of the stack, details about the parent stack if it’s a nested stack, the creation time, and, probably the most useful, the stack status, which allows us to check whether the stack is ready to use.

Create a Stack

In order to create a stack, we first need a template that defines what our stack will look like. I’m going to assume here that you have read the template guide and have a basic (or complex) template ready for use.

To load a template, the heat developers have provided us with the get_template_content method.

We created a dictionary with the required parameters and passed it to the stack create method. When more parameters are added to your template, all you need to do is extend the ‘parameters’ dictionary, without modifying the create call.
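The code itself is missing from this article, so here is a hedged reconstruction. The heatclient calls (template_utils.get_template_contents and client.stacks.create) follow the python-heatclient API as I understand it and are left commented out, since they need a live OpenStack endpoint; the helper below just assembles the arguments:

```python
def build_create_kwargs(stack_name, template, files, parameters):
    # Assemble the keyword arguments for client.stacks.create()
    return {
        "stack_name": stack_name,
        "template": template,
        "files": files,
        "parameters": parameters,
    }

# With a live heat client, usage would look roughly like this:
# from heatclient.common import template_utils
# files, template = template_utils.get_template_contents("my_stack.yaml")
# kwargs = build_create_kwargs("my-stack", template, files, {"flavor": "m1.small"})
# stack = client.stacks.create(**kwargs)

kwargs = build_create_kwargs(
    "my-stack", {"heat_template_version": "2018-08-31"}, {}, {"flavor": "m1.small"}
)
print(kwargs["stack_name"])  # my-stack
```

The template file name, parameter names, and stack name above are placeholders for illustration only.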

Inspect stack resources

Inspecting the stack as we previously did might not be enough in certain scenarios. Imagine you want to use some resources as soon as they are ready, regardless of the overall stack readiness. In that case, you’ll want to check the status of a single resource. The following code will allow you to achieve that:
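The snippet the author refers to is missing, so here is a sketch of how it might look with python-heatclient. The resources.list call and the resource_status attribute match the API as I understand it; the filtering helper and the stand-in objects are my own:

```python
def ready_resources(resources, status="CREATE_COMPLETE"):
    # Keep only the resources that have reached the given status
    return [r for r in resources if r.resource_status == status]

# With a live client:
# resources = client.resources.list("my-stack")
# for r in ready_resources(resources):
#     print(r.resource_name, r.resource_status)

# Quick demonstration with stand-in objects:
from types import SimpleNamespace
resources = [
    SimpleNamespace(resource_name="server", resource_status="CREATE_COMPLETE"),
    SimpleNamespace(resource_name="volume", resource_status="CREATE_IN_PROGRESS"),
]
print([r.resource_name for r in ready_resources(resources)])  # ['server']
```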

A Python @property decorator lets a method be accessed as an attribute instead of as a method with a '()'. Today, you will gain an understanding of why it is really needed, in what situations you can use it, and how to actually use it.

Contents

1. Introduction
2. When to use @property?
3. The setter method: when to use it and how to define one
4. Conclusion

1. Introduction

In well-written Python code, you might have noticed a @property decorator just before a method definition. In this guide, you will understand clearly what exactly the Python @property does, when to use it, and how to use it. This guide, however, assumes that you have a basic idea of what Python classes are, because the @property is typically used inside one.

So, what does the @property do?

The @property lets a method be accessed as an attribute instead of as a method with a '()'. But why is it really needed, and in what situations can you use it?

To understand this, let’s create a Person class that contains the first, last and fullname of the person as attributes and has an email() method that provides the person’s email.
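The class itself is missing from the text; a minimal version consistent with the description might look like this (the email format is an assumption). Note how fullname is computed once, in __init__, which is exactly what causes the problem discussed next:

```python
class Person:
    def __init__(self, first, last):
        self.first = first
        self.last = last
        self.fullname = first + " " + last  # derived once, at construction time

    def email(self):
        # Recomputed on every call, so it always reflects the current names
        return f"{self.first}.{self.last}@example.com".lower()

person = Person("John", "Doe")
print(person.fullname)  # John Doe
print(person.email())   # john.doe@example.com

# Changing a base attribute does NOT update the derived attribute...
person.last = "Smith"
print(person.fullname)  # still 'John Doe'
# ...but the method, recomputed on each call, stays correct:
print(person.email())   # john.smith@example.com
```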

Here is a fun fact about Python classes: if you change the value of an attribute inside a class, the other attributes that are derived from the attribute you just changed don’t automatically update.

For example: by changing the self.last name you might expect the self.fullname attribute, which is derived from self.last, to update. But unexpectedly, it doesn’t. This can provide potentially misleading information about the person.

However, notice that email() works as intended, even though it is also derived from self.last.

Since we are using the person.fullname() method with a '()' instead of person.fullname as an attribute, it will break whatever code used the self.fullname attribute. If you are building a product/tool, the chances are other developers and users of your module used it at some point, and all their code will break as well.

So a better solution (without breaking your users’ code) is to convert the method into a property by adding a @property decorator before the method’s definition. By doing this, the fullname() method can be accessed as an attribute instead of as a method with '()'. See the example below.
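The example the text points to is missing; here is a sketch of the fix, where fullname becomes a method decorated with @property so that existing code reading person.fullname keeps working:

```python
class Person:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    @property
    def fullname(self):
        # Recomputed on each access, so it always reflects the current names
        return f"{self.first} {self.last}"

person = Person("John", "Doe")
print(person.fullname)  # accessed like an attribute: John Doe
person.last = "Smith"
print(person.fullname)  # John Smith
```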

Your users are going to want to change the fullname property at some point. And by setting it, they expect it will change the values of the first and last names from which fullname was derived in the first place.

But unfortunately, trying to set the value of fullname throws an AttributeError.
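The setter code never made it into the article; one plausible sketch splits the assigned name back into its parts:

```python
class Person:
    def __init__(self, first, last):
        self.first = first
        self.last = last

    @property
    def fullname(self):
        return f"{self.first} {self.last}"

    @fullname.setter
    def fullname(self, name):
        # Split the assigned full name back into its base attributes
        self.first, self.last = name.split(" ", 1)

person = Person("John", "Doe")
person.fullname = "Jane Roe"
print(person.first, person.last)  # Jane Roe
```

The split(" ", 1) convention (everything after the first space is the last name) is my own assumption for illustration.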

There you go. We set a new value to person.fullname, and person.first and person.last updated as well. Our Person class will now automatically update the derived attribute (property) when one of the base attributes changes, and vice versa.

4. Conclusion

Hope the purpose of @property is clear and that you now know when and how to use it. If you do, congratulations! I will meet you in the next one.

It’s been over a year since Polyphony Digital‘s game was released, and the developers continue to expand it further and further! Gran Turismo Sport, which was announced at Paris Games Week in 2015, is now at its 1.29 update. You might already have it downloading on your console by the time you read this bunch of characters, as the update is set to release on November 6. The November update brought more than just cars. Nine cars were added to the game – a list follows: Ferrari GTO ’84, Pagani Zonda R ’09, Maserati GranTurismo S ’08, Honda EPSON NSX ’08, Lexus PETRONAS TOM’S SC430 ’08, Nissan XANAVI NISMO GT-R ’08, MINI Cooper S ’05, Jaguar E-type Coupé ’61, Subaru Impreza 22B STi Version…

San Francisco—The Electronic Frontier Foundation (EFF) launched a virtual reality (VR) experience on its website today that teaches people how to spot and understand the surveillance technologies police are increasingly using to spy on communities.

“We are living in an age of surveillance, where hard-to-spot cameras capture our faces and our license plates, drones in the sky videotape our streets, and police carry mobile biometric devices to scan people’s fingerprints,” said EFF Senior Investigative Researcher Dave Maass. “We made our ‘Spot the Surveillance’ VR tool to help people recognize these spying technologies around them and understand what their capabilities are."

Spot the Surveillance, which works best with a VR headset but will also work on standard browsers, places users in a 360-degree street scene in San Francisco. In the scene, a young resident is in an encounter with police. Users are challenged to identify surveillance tools by looking around the scene. The experience takes approximately 10 minutes to complete.

The surveillance technologies featured in the scene include a body-worn camera, automated license plate readers, a drone, a mobile biometric device, and pan-tilt-zoom cameras. The project draws from years of research gathered by EFF in its Street-Level Surveillance project, which shines a light on how police use, and abuse, technology to spy on communities.

Created by EFF’s engineering and design team, the Spot the Surveillance VR experience can be found at https://eff.org/spot-vr.

“One of our goals at EFF is to experiment with how emerging online technologies can help bring about awareness and change,” said EFF Web Developer Laura Schatzkin, who coded the project. “The issue of ubiquitous police surveillance was a perfect match for virtual reality. We hope that after being immersed in this digital experience users will acquire a new perspective on privacy that will stay with them when they remove the headset and go out into the real world.”

The current version is now being made publicly available for user testing, as part of the Aaron Swartz Day and International Hackathon festivities. EFF will be conducting live demonstrations of the project at the event on Nov. 10-11 at the Internet Archive in San Francisco. Swartz, the brilliant activist and Internet pioneer, was facing a myriad of federal charges for downloading scientific journals when he took his own life in 2013.

EFF seeks user feedback and bug reports, which will be incorporated into an updated version scheduled for release in Spring 2019. The VR project was supported during its development through the XRstudio residency program at Mozilla. The project was also made possible with the support of a 2018 Journalism 360 Challenge grant. Journalism 360 is a global network of storytellers accelerating the understanding and production of immersive journalism. Its founding partners are the John S. and James L. Knight Foundation, Google News Initiative, and the Online News Association.

This post is a response to this month’s T-SQL Tuesday #108 prompt by Malathi Mahadevan. T-SQL Tuesday is a way for the SQL Server community to share ideas about different database and professional topics every month.

This month’s topic asks to share how we learn skills other than SQL Server.

I enjoy learning to do new things; there’s a major sense of accomplishment I feel when I can tell myself, “Wow, I went from knowing almost nothing to being able to have this new skill.”

Over the years I’ve realized I’m pretty consistent in how I go about learning something new, so what follows is my process for learning a new skill.

What I Am Learning

Recently, my non-SQL Server related learning goals have been to learn to use plain old vanilla JavaScript.

In this case I’m not necessarily starting from nothing (I have been writing JavaScript for close to 20 years now…) but previously where it was necessary to use a library like jQuery to get any kind of compatibility and consistency across browsers, the time has finally come where the JavaScript (ECMAScript) standard is mostly implemented correctly in most modern browsers. No more need for large helper libraries!

And so the appeal here is that if I can ditch the overhead of a large library, my code will be simpler, easier to maintain, and faster to load and execute.

Steps to Learning a New Skill:

1. Commitment

For me, the hardest part of learning a new skill is time management: if I don’t make time for it, it won’t happen on its own.

I think the easiest way to make time to learn a new skill is to incorporate it into a project at work. By aligning it with your day job, you’re guaranteeing some time to work on it most days. Yes, critical projects and deadlines do come up where learning has to be set aside temporarily, but if you can find a project that doesn’t have urgent deadlines AND aligns with learning goals, then you’ll be good to go.

For me, learning vanilla JavaScript is a great “at-work” project since I’m already developing a lot of web apps with JavaScript anyway – the main difference is I’ll be using the standard JavaScript functionality instead of working through a library like jQuery.

Now obviously this won’t work in all scenarios: if you want to learn to build drones and you do development work for a chain of grocery stores, you probably can’t figure out a way to align your interest with work (unless of course your company is trying to build out a drone delivery service).

In that case, you need to set aside time at home. This essentially comes down to your own discipline and time management. The key here is that you need to set up time regularly and set yourself deadlines. Instead of having the deadline of a work project to help motivate you to learn, you need to tell yourself “I’m going to get this chunk of plastic and copper wiring up in the air by the end of the month” and try to deliver on that goal.

2. Go Cold Turkey

This is the hardest part of kicking any old habit. Ideally, when learning something new, I like to use it exclusively in all scenarios where I can.

This may not always be possible: sometimes there is a deadline you have to meet and trying a new skill that slows you down is not always the best idea.

But even if that’s your scenario, pick at least one project to go completely cold turkey on for learning your new skill. Going cold turkey on a project will force you to work through the hurdles and actually learn the necessary skills.

This can be challenging. I have the jQuery syntax and methods ingrained in my brain from years of use; switching to using standard JavaScript is tough because I’m frequently having to look up how to do things. But if I picked the right project (i.e. one without urgent deadlines), then this becomes a fun learning experience instead of something stressful.

3. Build a Collection of Resources

The internet is awesome: it contains nearly all of the information you could ever want for learning a new skill. The internet can also be a terrible place for learning a new skill if used incorrectly.

When learning something new, I try to find resources that guide me through a topic. Whether it’s a book, a website with a structured guide, a video course, or documentation with clear examples, it’s important to find something that will teach you the why as well as the how. I prefer using something with structure because it helps me learn the fundamentals correctly.

What I don’t like doing is searching for each question I have on StackOverflow. Don’t get me wrong, I love StackOverflow, but when learning some brand new skill I don’t think it always provides the best answers. Sometimes you get good answers, but sometimes you’ll come across answers that, while technically correct, apply to some edge case, old version of a language, etc… that make them less-than-helpful when learning a new skill.

4. Document and Share

As I learn, I document what I learn. This could be as simple as code snippets that I find myself using all the time, or it could be links to additional helpful resources.

Eventually I like writing up what I’m learning. Even if no one reads it, summarizing your thoughts and ideas will help clarify and retain them better. A summarized document or blog post also provides an additional reference for you to use if you need to in the future.

I haven’t been blogging publicly about my JavaScript learning, but I have been taking notes and sharing with individuals who are learning along with me.

5. Rinse and Repeat

That’s it! On my first pass at learning a new skill I try to finish a small project to get some immediate satisfaction. I then pick a new project that’s a little bit more ambitious, but still manageable because I now have some knowledge that I didn’t have before starting my first project.

Baby steps. A little bit each day (or every other day). Do it enough times and eventually you find yourself being fully capable of whatever skill you set out to learn.

When you hear Python, Django is often the first framework that comes to mind. It is undoubtedly the most popular framework, and it eases your work as a developer. There are clearly a number of reasons why Django development services have the strongest support from developers. Here we will list a few that should get you on board with Django too.
Django is fast. What does this mean? When you develop an application with this framework, it takes less time and the efficiency is higher…

Mobile applications have become everything these days. If you use a smartphone, you know the value of a mobile app. There are mobile apps for almost every necessity, from groceries to health care. Many people are coming up with new ideas for mobile apps, and everyone should know that the best mobile app development comes from the customer’s point of view, not the businessman’s. Of course, every mobile app developer works to the requirements of their clients, but…

Xamarin, as you may know, is a framework that helps mobile app developers build different kinds of mobile applications for iOS, Android, and the Universal Windows Platform. Apps are usually written in XAML or C#. Developers who already know easy-to-learn languages like Java, Swift, and Objective-C find Xamarin easy to pick up. It is also possible for developers to use the Visual Studio setup and code a cross-platform application in XAML or C#. As the C# code is compiled into native code, your…

Working on technologies that save lives, secure property, and make the world a safer place. At Avigilon we are helping solve some of the biggest challenges… (From Avigilon, Tue, 30 Oct 2018; Vancouver, BC)

Experience in working with Apple TV. The role would involve working on a variety of applications for iPhone, iPad, Apple TV, and Apple Watch… (From Ascentspark Software, Sun, 14 Oct 2018; Kolkata, West Bengal)

A web design company is filling a position for a Telecommute WordPress Support Superhero.
Core Responsibilities of this position include:
Supporting our awesome members and customers
Assisting with and solving all manner of WordPress questions, with style
Coordinating with developers over bugs, features and cool new stuff
Required Skills:
A really good familiarity with WordPress
Keen on working in an expanding, motivated, distributed support team
Love impressive response times, typing speed and the ability to really bang good stuff out
Ability to code (PHP/MySQL and/or HTML/CSS) a bit, or a lot, even better

A staffing company is seeking a Telecommute Software Architect.
Individual must be able to fulfill the following responsibilities:
Creating solutions architecture, algorithms, and designs for solutions
Leading weekly technical delivery of one or more products
Achieving an expert level understanding of our customers' environments
Must meet the following requirements for consideration:
At least 4 years of experience in a CI/CD environment
5+ years of experience as both a hands-on architect and a software engineer
Have a minimum of 2 years' experience leading or mentoring more junior developers
Bachelor's Degree in Computer Science, Electrical Engineering, or Computer Engineering
5+ years experience in writing unit tests
Leadership experience in creating, deploying, and iterating excellent software

A staffing company is in need of a Telecommute Front End Software Architect.
Candidates will be responsible for the following:
Keeping track of your daily and weekly output
Escalating issues to your manager as soon as a risk is identified or as soon as you are blocked in your work
Working within one of our engineering factory teams and meet your goals, which are set by your manager
Qualifications Include:
Experienced using Jira and Git
Bachelor's Degree in Computer Science, Electrical Engineering, or Computer Engineering
5+ years of experience as a software engineer using the latest front-end technologies
1+ years of experience working in an environment where CI/CD tools are used
Must have a minimum of 2 years' experience leading or mentoring junior developers
2+ years experience in writing unit tests

A staffing agency has a current position open for a Telecommute Database Engineer.
Individual must be able to fulfill the following responsibilities:
Working with teams to resolve production issues, troubleshooting symptoms and more
Evaluating all monitoring alerts; differentiating between valid and false positive alerts
Following documented processes for troubleshooting, recovery, and service restoration processes
Required Skills:
Bachelor’s Degree in Computer Science, Electrical Engineering, or Computer Engineering
2+ years of experience as a hands-on database developer/administrator
1+ years of practical work experience using cloud services such as AWS
Exposure to at least one of the databases - Oracle / MSSQL / MySQL
Good knowledge of monitoring and tuning databases to provide high availability service
Good knowledge of database backup and recovery, export/import, tools and techniques

KY-louisville, Experienced ETL Informatica Developer who will be interacting with business users and members of the technical team on the design and development of highly scalable ETL processes utilizing Informatica. • Minimum 5 years' hands-on experience developing ETL code with Informatica's PowerCenter software • Knowledge and experience with Erwin data modeling is a plus - 3 years of business/technology work

In his re-election campaign’s final hours, Senator Ted Cruz (R-TX) is still deploying a smartphone app created by a software team at the heart of the Cambridge Analytica controversy.

The app, Cruz Crew, was developed by AggregateIQ, a small Canadian data firm that was for years the lead developer used by the infamous data analytics consultancy that made headlines last spring for harvesting user data on millions of unsuspecting Facebook users while working for the Trump campaign. Since that firm’s demise, AggregateIQ has become one focus of an international investigation into alleged data misdeeds during the 2016 Brexit campaign, and is the first company to be targeted by regulators under Europe’s new data privacy law.

The Cruz Crew app’s login screen. The app’s Facebook login was finally removed in June. [Image: Google Play]
Both Cruz Crew and an app for Cruz’s presidential campaign in 2016 share an interconnected history of developers and clients linked to Cambridge Analytica, its British affiliate SCL Elections, and architects of the Republican Party’s recent digital efforts. Part of a group of apps presented as walled-garden social networks for political supporters, the software helps campaigns collect voter data and microtarget messages.

In April, Facebook announced it had suspended AggregateIQ over its possible improper access to the data of millions of Facebook users. But over a dozen apps made by AggregateIQ remained connected to Facebook’s platform until May and June, when Facebook belatedly took action against them.

A Facebook spokesperson told Fast Company that it was still investigating AIQ’s possible misuse of data, amid an ongoing investigation by Canadian prosecutors. The Cruz campaign did not respond to requests for comment.

David Carroll, a professor at Parsons School of Design at the New School in New York, who has brought a legal challenge against SCL and Cambridge Analytica for release of his voter data profile, said Cruz’s continued relationships with AggregateIQ highlighted problems with the use of data by a growing ecosystem of partisan election apps and databases. The risks are particularly high, he said, when the vendors are combining data from multiple sources and processing Americans’ data overseas.

“Despite the Cambridge Analytica fiasco, it seems that the Republican data machine is still a shadowy network that includes international operators, tangled up with vendors under intense scrutiny for unlawful conduct in multiple jurisdictions,” he said. “I don’t understand why Republicans don’t insist on working with domestic tech vendors and technologists who are U.S. citizens.”

The Cruz-Cambridge Analytica connection

During the 2016 race, a U.S.-based software firm named Political Social Media, but better known as uCampaign, was credited as developer and publisher for the official “Ted Cruz 2016” presidential primary app. At the time, the app achieved modest notoriety as a somewhat novel data collection tool, appearing alongside Cambridge Analytica under headlines like “Cruz App Data Collection Helps Campaign Read Minds of Voters,” with the app colloquially referred to in the press as “Cruz Crew.”

As in 2016, the 2018 Cruz re-election campaign relies on constant polling and voter modeling to understand and target mainstream conservatives in Texas. Cruz and his Democratic challenger Beto O’Rourke, who has repeatedly brought up Cambridge Analytica during the campaign and has refused to use big data analytics, have both heavily invested in social media. The media blitz hasn’t been cheap: According to data from the Center for Responsive Politics, the candidates in the 2018 Texas Senate race have set the all-time record for most money spent in any U.S. Senate election.

As part of its digital push, the Cruz campaign rolled out a new app, officially named “Cruz Crew,” which awards points to users for tweeting pro-Cruz messages, volunteering, and taking part in other campaign activities. On the app’s pages in the Google and Apple stores, AggregateIQ is not mentioned, but its name is visible as the developer in the app URL and in internal code. The app’s publisher is listed as the political marketing agency WPA Intelligence, or WPAi.

Chris Wilson, WPAi’s founder and chief executive, is a veteran GOP pollster who previously worked for George W. Bush and Karl Rove. WPAi’s past campaign successes include a trio of high profile Tea Party-cum-Freedom Caucus sympathizer senators: Cruz, Mike Lee (R-UT), and Ron Johnson (R-WI). By far, however, Cruz has been WPA’s biggest political client in the U.S. Between his bids for senator and president, Cruz campaign committees have paid out over $4.3 million to Chris Wilson’s firm since 2011.

As the director of research, analytics and digital strategy for Cruz’s 2016 presidential campaign, Wilson oversaw a large data team that included Rebekah Mercer and Steve Bannon’s Cambridge Analytica. Rebekah’s father, Robert Mercer, footed the $5.8 million bill for Cambridge Analytica by doubling that amount in donations.

Wilson and the Cruz team have repeatedly said that Cambridge Analytica represented to the campaign that all of the data it had was legally obtained. They also claimed that Cambridge did not deliver the results expected of them, neither through their much-discussed psychographics work nor through an important piece of software called Ripon.

Despite being a deliverable promised by Cambridge Analytica, the work on Ripon was outsourced to AggregateIQ. More recently, WPAi hired the firm to develop and manage the software for Cruz Crew, along with its two other currently available apps: one for Texas Governor Greg Abbott’s re-election campaign, and one for Osnova, a Ukrainian political party dedicated to the long-shot presidential aspirations of its oligarch founder, Serhiy Taruta.

In the 2018 race, WPAi and the Cruz campaign have said Cruz’s effort isn’t using new Cambridge Analytica-style “psychographic” modeling, but it is using social media data for specific targeting, and relying on previous campaign data. “We use social data to ID voter groups in our core universes,” WPA’s Chris Wilson previously told Fast Company. “A lot of those are 2016 voters who we know are persuaded by specific messages.”

Cruz Crew and TedCruz.org currently share a privacy policy that has barely changed since late 2015, when Cambridge Analytica and uCampaign were Cruz vendors. In both cases, the policy states that the campaign may “access, collect, and store personal information about other people that is available to us through your contact list,” match the info to data from other sources, and “keep track of your device’s geographic location.”

Beyond the existing campaign app, however, AIQ’s current involvement in the Cruz campaign’s data management and software development is unknown. A report by the New York Times last month found that when users shared their friends’ contact information with the Cruz app, that data was still being sent to AggregateIQ domains.

Wilson told the Times that his company, not AggregateIQ, received and controlled app users’ information. Representatives for AggregateIQ did not immediately respond to a request for comment, and WPAi did not respond to questions about the data firm.

Intelligence quotient

AIQ, founded in 2013 in Victoria, British Columbia, is currently under investigation in the U.K. and its homebase of Canada for electoral impropriety during the Brexit Leave campaign. The company’s name has come up repeatedly in parliamentary testimony for its alleged campaign finance and data protection misdeeds in connection with the parent company of Cambridge Analytica.

“Concerns have been raised about the closeness of the two organizations including suggestions that AIQ, [SCL Elections, and Cambridge Analytica] were, in effect, one and the same entity,” stated a recent report by the U.K.’s Information Commissioner’s Office.

In testimony to a U.K. parliamentary committee, former Cambridge Analytica executive Brittany Kaiser said that AggregateIQ was the exclusive digital and data engineering partner of SCL, the British parent affiliate of Analytica.

“They would build our software, such as a platform that we designed for Senator Ted Cruz’s campaign,” she said. “That was meant to collect data for canvassing individuals who would go door-to-door collecting and hygiening data of individuals in those households. We also had no internal digital capacity at the time, so we did not actually undertake any of our digital campaigns. That was done exclusively through AggregateIQ.”

AIQ founders Zack Massingham and Jeff Silvester had been brought into the fold a year prior by their friend Christopher Wylie, then an SCL employee, who blew the whistle on the firm’s practices earlier this year. According to Wylie, the founders registered their company in their hometown of Victoria as a result of an SCL contract, which subsequently led to political work in the Caribbean.

After the two firms first made contact in August 2013, while SCL was performing its first American political work in the Virginia gubernatorial race, AIQ designed solutions for deployment in campaigns under SCL’s supervision in Trinidad and Tobago. Part of the intent, according to records obtained by the Globe and Mail, was to harvest the internet histories of up to 1.3 million civilians in order to more accurately model their psychographics for message targeting.

In December 2013, an SCL employee proposed requesting the data from the country’s internet provider by posing as academic researchers, while seeking to tie internet addresses to billing addresses, without naming customers. In response, AIQ CEO Massingham replied by email that he could use every bit of data they could get. “If the billing addresses are obfuscated, we’ll have a difficult time relating things back to a real person or household,” he wrote. It remains unknown if that data was obtained.

The primary work AIQ performed was to design software that could be used to motivate volunteers, canvassers, and voters. This software concept was repeated for multiple clients, including Petronas, an oil company that sought to influence voters in Malaysia.

Campaign software developed by AIQ was used by Cambridge Analytica in U.S. elections and for clients like the oil giant Petronas. [Image: SCL]

AggregateIQ’s work across the pond

During the U.K.’s Brexit campaign in 2016, Vote Leave hired AIQ to place online ads, with AIQ paying for all 1,034 Facebook ads run by the campaign. AIQ’s services were also retained to develop and administer a piece of software that Vote Leave executives, including chief technology officer and former SCL employee Thomas Borwick, later credited with a large portion of the campaign’s success.

Vote Leave campaign director Dominic Cummings wrote an extensive blog post about the project, called the Voter Intention Collection System (VICS).

“One of our central ideas was that the campaign had to do things in the field of data that have never been done before,” Cummings wrote. “This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback . . . and b) having experts in physics and machine learning do proper data science in the way only they can, i.e. far beyond the normal skills applied in political campaigns.”

As the voter-facing front end for the Leave campaign data team, uCampaign was brought in and paid by AIQ to deliver the smartphone apps that helped to gather users’ cell numbers, email addresses, phone book contacts, and Facebook IDs for integration, exactly as it had done during the previous months for the Cruz 2016 campaign. Just as in that case, the app collected voter information for use in AIQ tools.

“We could only do this properly if we had proper canvassing software,” Cummings wrote. “We built it partly in-house and partly using an external engineer who we sat in our office for months.”

AIQ’s Zack Massingham repeatedly flew to the U.K. as his company was paid hundreds of thousands of pounds for its Vote Leave work in 2016, after a series of transactions between several campaigns that Canadian officials have questioned as “money laundering” and British authorities are investigating as criminal offenses. Nonetheless, after the referendum, Cummings released an open-source version of the VICS code on GitHub for future micro-targeters to use.

In early 2018, Kanto, one of the handful of data firms run by Vote Leave and SCL vet Thomas Borwick, was hired to do canvassing and social media work during the Irish abortion referendum. Anti-abortion activist groups also contracted uCampaign to build two separate apps, which alarmed campaign finance and privacy watchdogs and led to a ban on internet advertising.

As with uCampaign, which has also made apps for the likes of Donald Trump and the NRA, AIQ’s smartphone apps were designed to gather information via Facebook Login, a tool offered by Facebook to streamline user registration across the internet. Though Facebook tightened some restrictions this year as a direct response to the Analytica flare-up, Login has allowed third-party developers to gain access to a wide range of Facebook account information about registered users.

As part of its investigation into Cambridge Analytica and its affiliates, on April 7, Facebook said that it had suspended AIQ, effectively ending its ability to deploy Facebook Login. However, security researcher Chris Vickery discovered that AIQ’s access to the Facebook platform was still active as of May 17. Additionally, he found, AIQ had already collected info on nearly 800,000 Facebook account IDs in a database, with many matched to addresses and phone numbers. Facebook removed more AIQ apps two weeks later, but it was not until June 19 that the Facebook Login feature was removed from the apps for Cruz, Osnova, and Abbott.

In written testimony to Parliament, AIQ chief technology officer Jeff Silvester, who visited British prime minister Theresa May’s office with Massingham in the weeks after the Brexit vote, explained the history of the relationship between SCL and AIQ, which began in late 2013.

After building a “customer relationship management (CRM) tool” for SCL in Trinidad and Tobago, AIQ created “an entirely new CRM tool” for the 2014 U.S. midterm elections. “SCL called the tool Ripon,” Silvester wrote. AIQ was then required to transfer all software rights to SCL before working “with SCL on similar software development, online advertising, and website development” in support of Cambridge Analytica’s work for the Ted Cruz 2016 campaign.

A referral from “an acquaintance who was working with Vote Leave” led to AIQ being hired by Vote Leave in April 2016, the day before the campaign was designated as the official Leave organization.

Silvester explained, “They sell their software that we create for them to whomever they like, and we just simply support that work.”

In March, WPAi CEO Chris Wilson told Gizmodo that he had almost no knowledge of the controversy surrounding AIQ, despite their work for the Cruz 2016 campaign. “I would never work with a firm that I felt had done something illegal or even unethical,” he said. The firm’s work for WPA was the result of a competitive bidding process, he said, and AIQ “offered us the best capabilities for the best price.”

Leaving the nest

In February 2017, a story on the Politico Pro website announced Archie, WPA Intelligence’s new piece of software for 2018 campaigns. The software goes by a nickname used by Texas Governor Greg Abbott’s political team, referring to Archimedes, the Greek mathematician who said, “Give me a lever and I can move the world.”

A diagram describing Archimedes, WPAi’s new campaign software. [Image: WPAi]
“The program allows campaigns to work across all formats and vendors to collect data in one place,” the article said, and campaign staffers “will be able to use the app to generate models, target audiences, cut lists, and produce data visualization tools to make strategic decisions.”

From that description, Archie sounded very much like AIQ’s Ripon and VICS all-in-one campaign solutions. AIQ’s smartphone app for WPAi client Greg Abbott first appeared on Google Play and Apple’s iOS Store three months later, in May 2017.

Archie’s predictive modeling of Texan voters “yielded approximately 4.5 million individual targets for turnout efforts,” according to WPAi. That helped the Abbott campaign win the 2018 Reed award for Best Use of Data Analytics/Machine Learning in Field Program. In attendance at the March ceremony were representatives from Cambridge Analytica, which was nominated for Best Use of Online Targeting for Judicial Campaign.

Three weeks after the Reed awards, Christopher Wylie’s whistleblower account in the Observer was splashed across the world’s front pages. By the following month, SCL and Analytica were claiming bankruptcy, and AIQ’s cofounders were appearing before the Canadian Parliament and dealing with their suspension from Facebook as developers.

In June, a week before AIQ’s WPA apps finally removed Facebook Login, Silvester appeared before Canadian Parliament for a second time, where he was admonished by Vice Chair Nathaniel Erskine-Smith, who remarked, “Frankly, the information you have provided is inadequate.” After being threatened with a contempt charge for excusing himself from sworn testimony with a one-line doctor’s note, Massingham later spoke with the committee via audio-only link from his lawyer’s office.

In July, AggregateIQ was served with the U.K.’s first-ever enforcement notice under the EU’s new General Data Protection Regulation, known as GDPR. The U.K.’s Information Commissioner’s Office threatened AIQ with millions in fines if it did not “cease processing any personal data of U.K. or EU citizens obtained from U.K. political organizations or otherwise for the purposes of data analytics, political campaigning, or any other advertising purposes.”

After AIQ appealed the order, it was merely mandated to “erase any personal data of individuals in the U.K.,” though it was found to have “processed personal data in a way that the data subjects were not aware of, for purposes which they would not have expected, and without a lawful basis for that processing.”

As Ted Cruz wraps up his campaign, he continues to outsource part of his voter data harvesting to a foreign firm that has been blacklisted by Facebook and British and European regulators. The total data amassed through apps like Cruz Crew and projects like Ripon and Archimedes remains unknown, but they raise concerns that Cruz acknowledged when he launched his presidential campaign at Liberty University in March 2015. “Instead of a government that seizes your emails and your cell phones,” he said, “imagine a federal government that protected the privacy rights of every American.”

Jesse Witt (@witjest) is an independent researcher, writer, and filmmaker.

The ridiculous prices charged by housing developers have become an issue for the people of Sabah today. Many government and private workers bought houses from the same developer (not mentioned) on the promise of good service and good-quality houses. But it was such a disappointment: high prices, low quality, and units smaller than the showroom, even though the developer promised buyers that the houses built would be exactly like the showroom. Sadly, the monthly maintenance fee is higher than that of housing from other developers, and it could be said that this SAME DEVELOPER is famous for its expensive monthly maintenance fees and the worst service (including the rudeness of its management workers) toward the people under its management. Several complaints were made to the authorities, but none of them responded. NOW, let the petition speak for the owners of UNIVERSITY UTAMA CONDOMINIUM, JALAN KAYU MADANG, TELIPOK, KOTA KINABALU, SABAH. (Credits to Panji Hitam for the images)

If you think house prices today are ridiculously expensive, please sign this petition.

Combining the broad reach of Windows, best-in-class developer tools, a re-imagined user experience, and a built-in store, Windows 8 is the largest developer opportunity — ever.

Are you ready? Then join us for this free, full-day event filled with coding, sharing, plenty of food, and perhaps the occasional Lightning Talk on topics determined by your apps and questions.

FAQs

What is a hackathon?

These hackathons are a really fun way to get “down and dirty” with the technology and experience development alongside others in the same room. It's an open Windows 8 code fest, where you’ll put what you know into practice and be eligible to win some great cash prizes! Code to your heart’s content, with Windows 8 experts available to guide you through every step of the process. It’s the perfect opportunity to get your dream application underway, or to finish that app you’ve already started.

What do I need to bring to the event?

You will need to bring a photo ID, your registration, a computer with Windows 8 and Visual Studio Express 2012 for Windows 8 (or any of the commercial editions of Visual Studio 2012), and your Windows 8 app idea (or a partially completed app, if you have one).

What are the prizes?

We have three cash prizes:

First place is $1000.00

Second place is $500.00

Third place is $250.00

Winners will be responsible for taxes (if any) and you must be present to win.

Who are the judges?

Judging will be performed by a panel of 3 judges (still being determined) and will be based on application completeness and how well the application follows the Modern UI principles.

Who are the sponsors?

We wouldn't be able to host this event without our corporate sponsors. They are providing us everything from food to prize money.

(Please note that there is limited space available for this event, so be sure to register early.)

A new book on WCF was just published by Juval Lowy at IDesign. For those of you that don't know, Juval is Microsoft's Regional Director for the Silicon Valley area and has helped in the internal strategic design reviews for the .NET Framework. He has presented sessions at the last two Tech·Ed conferences on WCF and helped shape the technical strategy and direction for WCF with Microsoft.

I haven't picked up my copy yet, but will be getting one soon. The book focuses on the "why" behind particular design decisions in WCF and is a practical approach to building WCF enabled services.

There is also a new "Rough Cuts" edition of Learning WCF available by Michele Leroux Bustamante (also at IDesign). This book is aimed at beginning-to-intermediate WCF programmers and focuses on actual transmission (what happens on the wire) and interoperability techniques, while Juval's book is aimed at more advanced developers and focuses on the system side of developing WCF applications.

In any case, both of them should be good to add to your library. I know I will be adding them to mine.

VA-Roanoke, At Clarkston, you’re in good company. Capabilities mean more than titles. Years of experience are substituted with ability and drive. We serve clients – not customers. We hire leaders and innovators – not employees or staff. At Clarkston, we don’t just say we’re great, our clients and our people do too: • Clarkston is a leading business and technology professional services firm with experience ser

TX-Austin, Peak Performers is a nonprofit recruiting firm specializing in contract roles with state of Texas government agencies. We've partnered with the Health and Human Services Commission (HHSC) to recruit for a Salesforce Developer in Austin, Texas. Because of the nature of the information in the system that will be implemented, all entities must sign a Data Use Agreement (DUA) as a condition of employm

VA-Alexandria, Senior SQL Server Developer / Programmer Full time, permanent position Alexandria, VA Sorry, cannot sponsor H1b Visas at this time THE ROLE YOU WILL PLAY: As a Senior SQL Server Developer / Programmer, you will receive complex data from our member base in flat file formats, and design and write scripts to load, automate, connect, tune, and streamline programming processes for these various types o

TX-AUSTIN, Peak Performers is a nonprofit recruiting firm specializing in contract roles with state of Texas government agencies. We've partnered with the Texas General Land Office (GLO) to recruit for a Software Developer that will be a member of a team that is tasked with implementing a custom, browser-based, responsive software application used by a government agency to prevent and respond to oil spills.

All this while leveraging new display technologies such as 4K, HDR, wide gamut, quantum dots and more! Who are we and what do we do?… (From IRYStec Software, Wed, 25 Jul 2018; Montréal, QC)

IL-Chicago, OUR PROJECT Immediate opportunity to join our team with a worldwide leader in location-based services to build a location-data platform that will support cutting-edge mapping software and autonomous driving technology. This is a long-term contract position in downtown Chicago with competitive rates and opportunities for contract extensions. You can expect very competitive pay as our median compens

CA-Los Angeles, We are NGP, an award-winning IT staffing & consulting firm, and we're currently assisting one of Southern California's best & brightest software companies (headquartered in Pasadena, CA) to find a Senior .NET Developer for their full-time team. In this key role, we are seeking a proven Web Developer that brings strong experience working with C#, ASP.NET, MVC, Web Services and SQL Server for a grea

FL-Miami, Senior .NET Developer (Web App) - Contract - Miami, FL - $60-65 per hour The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply. * Assist with the design and building of advanced applications * Must be a self-starter and be able to work well both i

Size: 566 MB. Censorship: Absent (a patch for removal is available). Developer/publisher: TemptationXXX. Platform: PC/Windows/MacOS. Edition type: In development. Crack: Not required. Version: Ep1 v1.0. Game language (plot): English. Interface language: English.

2 to 4 years of experience in full lifecycle of application development, database design, and implementation. Web development with ASP.NET (C# development with...From Canadian Tire - Fri, 21 Sep 2018 05:28:00 GMT - View all Brampton, ON jobs

Our client is a well-established leader in online sports gaming with a Technical Centre of Excellence at Yonge and Sheppard. They have engaged ROSS to help...From ROSS Recruitment - Sat, 29 Sep 2018 03:52:35 GMT - View all North York, ON jobs

NFL outfits are coming to Fortnite, thanks to a partnership between the football league and Epic Games, the developer of the game. Players will be able to purchase outfits to represent their favorite team from the Battle Royale Item Shop, and all 32 team outfits will be available. Additionally, up to eight outfits can be […]

Interested in being a part of a team that is leading the evolution of retail in Canada? Embracing and driving change is critical to our success....From Canadian Tire - Fri, 21 Sep 2018 05:28:00 GMT - View all Brampton, ON jobs

Advanced working knowledge of Java language. We’re looking for a software developer/tester that is eager to have significant influence on a massive technology...From Indeed - Thu, 01 Nov 2018 21:03:53 GMT - View all Toronto, ON jobs

DEVELOPER Pat Crean’s Marlet Property Group has hit out at Labour Party senator Aodhan O’Riordain over his involvement in local opposition to its plan to deliver 164 new homes on a site overlooking Howth harbour in Dublin. While the Baily Court scheme...

IA-Grimes, CTG has a QA Automation Engineer DIRECT HIRE Permanent position located in Grimes, IA. The QA Automation Engineer will be involved throughout the solution development lifecycle. The primary responsibility will be in the quality assurance of continuous software delivery with a focus on functional test automation. Will work closely with developers as they implement features. Responsibilities: • Wor

3 Applications of Cryptocurrency Beyond Peer-to-Peer Payments

To be sure, digital currency markets have had their ups and downs. A recent study by Gallup shows that only 2 percent of investors are currently purchasing Bitcoin or other cryptocurrencies, but one in four is intrigued. With major banks betting on the space, however, that math may be about to change.

Whether or not your company accepts cryptocurrency as a payment method, it would do well to pay attention to the surprising ways the business world is already using digital currency:

1. Investing in customer loyalty.

Loyalty programs have long struggled to find the right incentive structure. According to the 2017 Colloquy Loyalty Census, more than half of loyalty memberships in the U.S. are inactive. The report says approximately 30 percent of surveyed U.S. and Canadian consumers have left loyalty programs without ever redeeming a point or a mile.

In Zurich, for example, Caffe Lattesso encourages purchasers to redeem codes found on its bottles for loyalty rewards in the form of digital coins, which can be exchanged within a few months for other digital tokens or traditional currency. EZ Rent-A-Car is following suit with a program that allows customers to exchange their loyalty points for digital coins.

2. Banking on accessory technologies.

Rather than re-invigorate their existing customer base, other entrepreneurs could look at building a new one around the cryptocurrency market. Investors may start focusing less on initial coin offerings and more on building the technological ecosystem around cryptocurrencies.

Demand is growing quickly for digital currency point-of-sale systems, for example. Although most of the demand is currently in South Korea, at least one company plans to distribute some 100,000 point-of-sale machines by 2021. Vendors that accept cryptocurrencies will also need accounting and reporting software to support the payment method.

3. Making change with ease.

But cryptocurrencies are good for more than spending money; they're also great for giving back. As Eric Tippetts, co-founder of NASGO, pointed out at the United Nations' Media for Social Change Summit, cryptocurrency's digital nature makes set-it-and-forget-it philanthropy possible.

"Instead of voicing a commitment to philanthropy, the blockchain makes it possible to program giving into the operation itself," Tippetts explained in a Cheddar interview. NASGO's financial systems, he noted, direct every seventeenth revenue cycle into an account for humanitarian contributions...
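The "program giving into the operation itself" idea can be sketched in a few lines: route every Nth revenue cycle into a dedicated charity account. The class and figures below are purely illustrative assumptions, not NASGO's actual system or API.

```python
# Hypothetical sketch of "programmed giving": every Nth revenue cycle
# is routed to a charity balance instead of the operating balance.
# Names and numbers are illustrative, not NASGO's real implementation.

class RevenueLedger:
    def __init__(self, giving_interval=17):
        self.giving_interval = giving_interval  # every 17th cycle donates
        self.cycle = 0
        self.operating_balance = 0.0
        self.charity_balance = 0.0

    def record_cycle(self, revenue):
        """Record one revenue cycle, routing it automatically."""
        self.cycle += 1
        if self.cycle % self.giving_interval == 0:
            self.charity_balance += revenue
        else:
            self.operating_balance += revenue

ledger = RevenueLedger()
for _ in range(34):          # cycles 17 and 34 go to charity
    ledger.record_cycle(100.0)

print(ledger.charity_balance)    # 200.0
print(ledger.operating_balance)  # 3200.0
```

The point of the design is that no one has to remember to donate: the routing rule is part of the bookkeeping itself, which is what makes it "set-it-and-forget-it."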

Software developers in the integration team contribute to the development of new features to support different models of cameras and IP video encoders, as...From Genetec - Mon, 13 Aug 2018 06:05:34 GMT - View all Montréal, QC jobs

Design and implement a project that demonstrates the capability of autonomous. That requires the completion of a co-op work term....From Canadian Tire - Thu, 30 Aug 2018 23:28:20 GMT - View all Kitchener, ON jobs

We want to see a strong portfolio with attention to UI and UX detail. Teton Gravity Research is in search of an experienced front-end developer....From Indeed - Wed, 17 Oct 2018 20:45:20 GMT - View all Jackson, WY jobs

Dutch Ridge Consulting Group (DRCG) is seeking a Software Developer to support our Government clients. This position is a corporate position supporting all of...From Dutch Ridge - Wed, 24 Oct 2018 21:43:06 GMT - View all Clarksburg, WV jobs

Fast-growing technology company is looking for an iOS Developer to help the team expand their ecommerce platform. This will be a long term opportunity to get...From Total Med - AWI Tech - Sat, 27 Oct 2018 04:20:09 GMT - View all Appleton, WI jobs

Fast-growing technology company is looking for an Android Developer to help the team expand their ecommerce platform. This will be a long term opportunity to...From Total Med - AWI Tech - Sat, 27 Oct 2018 04:20:09 GMT - View all Appleton, WI jobs

High sensibility to UI design and UX. Fast-growing technology company is looking for two React/NodeJS Developers to help the team expand their ecommerce...From Total Med - AWI Tech - Sat, 27 Oct 2018 04:20:08 GMT - View all Appleton, WI jobs

High sensibility to UI design and UX. Fast-growing technology company is looking for a Ruby on Rails Developer to help the team expand their ecommerce platform....From Total Med - AWI Tech - Sat, 27 Oct 2018 04:20:08 GMT - View all Appleton, WI jobs

Match-3 and dating simulator developer HuniePotDev has unveiled Brooke, a new character coming to HuniePop 2 who will surely please MILF enthusiasts. The announcement comes directly from the developer’s Twitter. Fans have been expecting the return of the foul-mouthed Audrey; now that it has been revealed that her aunt is in the game, we may see […]

Featured Job:

Scenic is currently recruiting for the newly created role of Manager, Scenic Eclipse Marketing. The successful candidate will be required to develop and implement strong, industry-leading marketing and sales strategies in order to meet the business objectives and sales targets. Click Here to Learn More.

Cruise Job Alerts - Sign Up for a Weekly Email of the Best Industry Jobs:

Julien Ball writes from San Francisco on the importance of a ballot measure that, if passed, would help to counter the corporate landlords and their gentrification plans.

CALIFORNIA VOTERS have a chance on Election Day to pass a ballot measure, known as Proposition 10, that would restore the right of cities to pass strong rent control.

Martha Simmons’ story illustrates why it would be an important victory for housing justice if Prop 10 passes.

Martha is one of the millions of California renters who have been negatively affected by the Costa-Hawkins Rental Housing Act of 1995 — the state law restricting local authorities from implementing rent control, which Proposition 10 would repeal if it passes.

Martha grew up in San Francisco’s Bayview-Hunters Point neighborhood, and her family has had roots in the historically working-class, African American area for several generations. Thus, in 2005, she was happy to move into a home at Ingerson and 3rd Street that was big enough for her large, multigenerational, tight-knit family.

The rent, $3,300 per month, wasn’t cheap, but by working long hours at her job as front-desk security at a downtown building, Martha found that she could afford it and even begin to save money. The landlord, Dorothy Banks, never made repairs, but Martha put up with that because Banks promised her that when she sold the home, Martha could buy it.

She continued to make the repairs that she could out-of-pocket to keep the home in good condition. At one point, the septic tank flooded the unit where Martha’s sister lived, rendering it uninhabitable. Martha’s sister suffered severe health problems before eventually moving out.

Despite all this, Martha continued to work to make the rent on time and put some money aside to buy the home one day. But that was before San Francisco’s real estate market exploded.

When Banks decided to sell, instead of accepting Simmons’ $650,000 offer on the home, she raised the rent, first to $4,700 a month, and then to $5,200, in the hopes she could drive the family out and sell the home empty, getting more money for it.

Simmons desperately scrambled to make ends meet, working three jobs, sleeping just a few hours a night and cashing out her 401(k) retirement fund to make the payments on time. But when even that wasn’t enough, she was faced with an eviction notice for nonpayment of rent.

She joined the community group Alliance of Californians for Community Empowerment (ACCE) to help organize rallies and protests, but nothing succeeded in getting Banks to reconsider.

Martha and her family finally realized their dream of home ownership, but only by moving to Oakley, on the outskirts of the Bay Area, more than 50 miles from their former home — joining the legions of working people and people of color who have been displaced from San Francisco and other California cities due to evictions and rising rents.

WHAT HAPPENED to Martha and her family is perfectly legal because of a state law called Costa-Hawkins that, among other restrictions, bans cities from covering single-family homes like hers under rent control laws.

Martha is supporting Prop 10, which would repeal Costa-Hawkins, in the hopes that others will be spared what she suffered. “It’s stressful to have one foot in the door and one foot in the street,” says Martha. “No human being should have to go through what we went through. It’s disturbing to know so many people are going through the same thing.”

Costa-Hawkins — named after its legislative sponsors Democratic state Sen. Jim Costa and Republican State Assembly member Phil Hawkins — was a bipartisan bill passed by the legislature over two decades ago at the behest of the real estate industry.

First, it bans cities from covering single-family homes or condos under rent control.

Second, it stops cities from applying permitted rent control regulations to so-called “new construction,” defined as any construction after the 1995 passage of the law. Cities that already have rent control are barred from extending rent control to buildings constructed after those cities passed their ordinances. In San Francisco, that date is 1979, and in Los Angeles, it is 1978.

Finally, it bans vacancy control — a type of rent control that restricts rent increases in between tenants. So when a tenant leaves, landlords can charge as much as they want to the next tenant.

In a city like San Francisco, a market with large numbers of higher-income earners, lack of rent control between tenancies creates a huge incentive for landlords to evict low-rent tenants so they can bring in tenants who can afford market rate. In San Francisco, that median rate was $3,253 per month for a one-bedroom apartment in 2017, according to the real estate website Curbed.

Lack of vacancy control, coupled with the ban on rent control for any so-called “new construction,” means the sometimes gradual and sometimes rapid loss of affordable, rent-controlled apartments, with no way to replace them. That, combined with decades of federal divestment from affordable and public housing, has amounted to a deliberate, bipartisan attack on our ability to solve the affordability crisis.

REPEAL OF Costa-Hawkins has long been at the top of the wish list for California tenants’ rights groups struggling to deal with the affordability crisis, but only recently have they considered it possible.

While cities like San Francisco have established nonprofit organizations advocating for tenants’ rights, in other cities, volunteer-led groups have more recently formed to organize tenants and press for local rent control ordinances.

Earlier this year, groups around the state organized to push for AB 1506, legislation that would have repealed Costa-Hawkins. The San Francisco-based Anti-Displacement Coalition organized protests to get State Assembly member David Chiu, one of the sponsors, to bring the bill to his committee for a vote.

When a committee hearing finally took place in January, hundreds of tenants and their supporters from around the state packed the room, along with landlords organized by real estate front groups.

When the Democratic-led Assembly failed to move the bill out of committee, Los Angeles-based groups such as the AIDS Healthcare Foundation, ACCE and the Eviction Defense Network began collecting signatures to put Proposition 10 on the ballot.

PROP 10 is facing an uphill battle. State and national real estate groups have spent a whopping $77.5 million on misleading ads to convince voters that the referendum is bad for renters.

A number of Wall Street developers and real estate investment trusts like Blackstone Group, its subsidiary Invitation Homes, Essex Property Trust, Equity Residential and AvalonBay Communities — with portfolios of tens of thousands of California homes between them — each donated $3 million or more to defeat the initiative.

These corporations know that passing statewide restrictions on local control is one of the most effective weapons in their arsenal — and that if Prop 10 passes, groups in the 32 states that currently have an outright ban on rent control could decide to take matters into their own hands and repeal those restrictions, too.

Passage of Prop 10 would be only the first step in winning the kind of rent control needed to address the affordability crisis. Once Costa-Hawkins restrictions are lifted, it will be up to cities to pass the rent control they need to address the crisis. Local political leaders of both parties rely on real estate developers and real estate industry groups for campaign donations, and industry lobbyists often enjoy almost unfettered access to politicians.

Working people and tenants will have to build movements with the necessary size, organization and social power to stop evictions, beat back rent increases and force politicians to pass local laws that prioritize them over real estate industry profits.

But voting “yes” on Proposition 10 is crucial to widening the scope for what victories are possible for the housing movement in the years to come.

Backstop, Mercury, Jupiter, and E-Front. This is a full-time, on-site position in the city of Richmond, Virginia working for a financial planning and investment...From Key Cyber Solutions - Fri, 27 Jul 2018 03:11:08 GMT - View all Richmond, VA jobs

The million-plus acres of pinelands sprawling across 56 communities and seven Garden State counties represent one of the most highly protected environments in the country. And for good reason.

Rare species of plants and animals flourish in its leafy expanses, and trillions of gallons of water flow under its sandy soil.

But many of the state's environmental activists are rightfully concerned - if not downright angry - that the men and women charged with the stewardship of this precious jewel have bungled the job.

They're calling on Gov. Phil Murphy to overhaul the New Jersey Pinelands Commission, the 15-member panel charged with preserving, protecting and enhancing the natural and cultural resources of the vast parcel of land that more than 40 years ago became the country's first National Reserve.

At a fiery rally in front of the annex of the State Capitol last month, speaker after speaker urged Murphy to undo damage inflicted by his immediate predecessor, whose appointments to the commission often seemed driven less by protection than by politics.

What's your take on this? What would you like to see the governor do? Share your thoughts on our Facebook Page

Seven of the members are appointed by the governor. Seven others are named by freeholders from the counties within the Pinelands - Atlantic, Burlington, Camden, Cape May, Cumberland, Gloucester and Ocean - while one is appointed by the U.S. Secretary of the Interior.

Under the watch of former Gov. Chris Christie, the panel has repeatedly approved gas pipeline measures, such as the New Jersey Natural Gas Southern Reliability Link Pipeline, and has fallen short on protecting the supply of fresh water that lies beneath the Pinelands.

Moreover, rally speakers pointed out, individuals and environmental organizations have been repeatedly shut out of the decision-making process, leaving the commission without a full understanding of the facts before members make critical decisions.

Doug O'Malley, director of the watchdog organization Environment New Jersey, took aim at Christie's appointments, individuals he charged were part of a "clear attempt to undermine the Pinelands."

And he decried the firing of several long-term commissioners, among them Bob Jackson, the only African-American on the panel, whose only "crime" was to stand up for the ideals of preservation and the future of the Pines.

"Gov. Christie played Bridgegate-like politics with the Pinelands, and with Pinelands commissioners," O'Malley charged.

All seven of the commissioners chosen by the Republican governor are now serving under expired terms. They include its chairman, Sean W. Earlen, the mayor of Lumberton and a vice president of a Pennsylvania construction firm.

We're not sure why Murphy, with all his progressive zeal, has failed to put his own stamp on the commission 10 months into his tenure. But it's not too late.

The coalition that rallied in Trenton last month was so right. It's time for new leadership to assure that this remarkable gem within our borders remains an ecological wonder, and not a developers' playground.

Rumors that Samsung will launch its much-anticipated foldable phone alongside the Galaxy S10 lineup next year have been making the rounds for some time, but it now seems the Korean giant may give us an early look at the device during its Developer Conference, starting tomorrow. While we had expected it to be only a […]

Senior React Native Developer (Mobile App Developer)
Responsibilities:
• Design, develop, implement, and resolve issues relating to mobile development.
• Research and recommend cutting-edge tools, frameworks, and techniques.
• Collaborate with other team members to translate system architecture and requirements into well-implemented software components.
• Continue development of ongoing projects.
• Make a weekly sprint and present an update every week.
Senior Web Developer
Responsibilities:
• Develop new software according to business requirements.
• Work with the web development team to provide solutions to bugs and issues.
• Continue development of ongoing projects.
• Make a weekly sprint and present an update every week.
• Keep up to date with the latest web development technologies.

Chip and component developer CST Global Ltd. has commissioned its second metal-organic chemical vapor deposition (MOCVD) reactor in collaboration with the University of Glasgow (UG). While the reactor ... - Source: www.photonics.com

Temple Run 2 is a horror game. Recently, that’s been sort of obvious. For October, developer Imangi’s endless runner was decked out in Halloween accouterments. Jack-o’-lanterns shine with glowing, yellow smiles as you sprint past. […]

Swedish indie developer Lionbite Games released the first gameplay trailer for its dystopian adventure Rain of Reflections today. Using only in-engine footage, the trailer showcases the game's sleek, dystopian aesthetic while also introducing the core gameplay systems.

Along with voice-acted dialogue systems where players must weigh their decisions carefully, the game also incorporates puzzle solving minigames, turn-based strategy with a wide array of tactical options, and point-and-click exploration.

With the announcement of three chapters, the trailer sets up the events of the game’s first chapter, “Set Free,” which focuses on scientist Wilona, who works to uncover the mysteries behind humanity’s sudden infertility. When she begins to question the morality of her experiments, Wilona frees her test subject – the last child to be born – and must navigate opposing forces who seek to stop her.

The following two chapters will be played from the perspectives of private detective Dwennon West in a "cyberpunk-esque city district" and resistance figure Imra, respectively.

No specific release date has been announced, but the first chapter is coming to PC in the coming months.

Ubisoft has announced the first details of Rainbow Six Siege's fourth season, Operation Wind Bastion, which takes players to Morocco with two new operators and a new map; a full reveal is coming later this month.

"You’ll have the rare opportunity to test your skills inside a stunning mudbrick kasbah," the post reads. "Enjoy unprecedented roof access, but do try to stay focused despite the breathtaking oasis just next door."

In addition, there are two new operators for players to play around with. The first, a defender, is a male character stationed within the Moroccan Special Forces. Described as "immovable as the mountains," the new operator emphasizes the ability to hold your spot for as long as you possibly can.

The second operator is a female attacker who specializes in the ability to use her environment to traverse while on the offensive. "Perceptive and resourceful," the developers write, "she’s an expert on environmental operations with a knack for pushing the enemy back."

A full reveal of the new map and the two operators will take place on the Rainbow Six Siege Twitch channel on November 17, during the Pro League Finals for the game.

Digital Extremes has been teasing its newest Warframe expansion, Fortuna, for quite a while now. After going open world with Plains of Eidolon last year, Fortuna continues with the new structure and a host of other new features, and the developer is finally narrowing that down. Warframe fans can finally get their hands on the latest expansion this week on PC.

Check out the release trailer for the expansion released today.

Fortuna adds ridiculously speedy hoverboard technology called the K-Drive, a new faction called the Vent Kids, revamped mechanics like fishing, and way more. All of this comes to Warframe completely free, though it is only coming to PC for now. Fortuna will come to PlayStation 4 and Xbox One sometime in the future.

Warframe is also coming to the Nintendo Switch relatively soon. You'll be taking Warframe on the go starting November 20, but you can skirt around the open worlds of Fortuna later this week.

The ESRB has also just added a listing for a PC port of the game, lending more credence to the possibility of seeing it on PC in the near future, likely under Microsoft's expanded Game Pass. Obviously none of this is an official announcement, but it's about as solid a bet as you can make that we'll see Sunset Overdrive on PC soon.

Action RPG fans have long been singing the praises of Grinding Gear Games' Path of Exile, the free-to-play loot-based game with strong Diablo influences. The game was released in 2013 on PC to surprising popularity and multiple expansions before launching on Xbox One in August 2017. Now, over a year since the first console port, the developers are bringing the game over to PlayStation 4 for the first time.

Alongside the 3.5.0 expansion coming to PC and Xbox One in December, the PlayStation 4 version is also set to launch the same month. While the expansion has a set date of December 7, the PS4 version will launch at an unspecified point in December.

Check out the PS4 release trailer below.

Grinding Gear pride themselves on the payment model, which they describe as "ethical microtransactions," primarily focused on cosmetics while letting players create and use as many accounts as they want. The developer has also ruled out crossplay, as the console versions play on different servers.

Path of Exile will also be getting private leagues soon, setting up custom rulesets for the world for a fee or microtransaction points.

Football Manager 2019 is out on PC and mobile, but if you're a fan of the franchise you obviously don't need me to tell you that. You've probably already made Krasnodar the biggest club in the world – or maybe you're still tinkering with your training schedule in the pre-season. But as a neophyte to the series and all its complexities I'm here to tell everyone that there is room for all kinds of players in Football Manager 2019.

It's easy for your eyes to glaze over when you're not a hardcore fan looking at a yearly sports title's annual feature sheet. You haven't been playing each installment from front to back anyway, so why should this year catch your eye any more than the others? Football Manager 2019 does well to bridge this divide and give something to fans of all stripes through its fundamental design.

I'm not a huge fan of the game's menu colors or layout, but overall the title does a good job executing one of its basic charges: Giving players the information and tools they need to succeed, and then letting them take control of their destiny. It's not just through the new, optional induction tutorial system. Giving unsure or uninterested managers the choice to delegate whole sections of the game to their staff offers the flexibility necessary to offload parts of this large title without sacrificing its scope and complexity.

Through the weeks controlling my club, I'd check in with my training staff, for example, to see how things were going even though I'd opted out of setting up the new daily schedule. In this instance, I'd give specific players rest so they didn't get injured or I'd add a new tactical formation to train or start breaking in a player at a new position. It's also a handy tool to simply see who's doing well in the leadup to a match and who may warrant inclusion or may be dropped. I can always layer on more responsibilities or drop others at any point without having to start a new save or worry that the ship isn't humming along in the background.

Similarly, I liked signing off on transfer decisions and setting the overall scouting direction without beating the bushes myself or micromanaging the scouts themselves. In this way the game actively keeps you in the loop and lets you make the big decisions without bogging you down in minutiae if that's not what you want.

The game's flexibility helps me focus on one of the larger tasks at hand – getting results on the pitch – a place where FM 2019 requires that you stand on your own two managerial feet. What tactics should I use when we're in possession of the ball versus when we're out of it (another new feature)? What's the best eleven I can field for this particular match? Who should I put on as a sub and when? Your previous decisions in training, man management, and transfers lead you to these match-related points, of course, but the off-the-pitch business of running a club doesn't overshadow what happens on it. Instead, the two combine nicely. I'm not (yet) a world class manager by any stretch, but I feel it's within my grasp thanks to the tools at my disposal.

All that being said, I do think the series could make a few additions that I think would not only help newer players but vets could also find interesting.

A.I. Sliders – Many sports games use sliders, allowing players to tweak specific components of the game's A.I. Players could use this at their discretion to tweak everything from how aggressive other teams are in the transfer market to the board's tolerance for failure.

Challenges/Scenarios – A lot of players have created their own FM challenges through the years, such as taking a team from the lower leagues all the way to the top or replicating the career of Sir Alex Ferguson. This would take that a step further by incorporating them directly into the game (including leaderboards) as a way to let players experience gameplay in pre-set conditions of varying length, difficulty, and imagination.

I understand Football Manager developer Sports Interactive has considered these through the years and has its own reasons and philosophy toward them, but I think they're worth bringing up for a title that is already about letting you twiddle a plethora of knobs on your franchise experience.
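The slider idea above could be as simple as a table of named multipliers applied to specific A.I. behaviors. The sketch below is purely hypothetical; the slider names, the 0-100 scale, and the scaling rule are all assumptions for illustration, not an actual Football Manager feature.

```python
# Hypothetical sketch of "A.I. sliders": named sliders (0-100, with 50
# as the default) that scale specific A.I. behaviors. All names and
# numbers here are illustrative assumptions.

AI_SLIDERS = {
    "transfer_aggression": 50,  # how hard A.I. clubs chase transfer targets
    "board_patience": 50,       # the board's tolerance for failure
}

def scaled(base_value, slider_name, sliders=AI_SLIDERS):
    """Scale a base behavior value by its slider; 50 leaves it unchanged."""
    return base_value * (sliders[slider_name] / 50.0)

# Turning transfer aggression up to 70 scales a 10M base bid budget up:
AI_SLIDERS["transfer_aggression"] = 70
print(scaled(10_000_000, "transfer_aggression"))  # 14000000.0
```

The appeal of this shape is that every tunable behavior reads its multiplier from one place, so players adjust a handful of numbers rather than editing game logic.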

Original Story:
Last week, Blizzard's keynote speech at BlizzCon ended with Diablo Immortal, a mobile Diablo game made in collaboration with Chinese mobile developer NetEase. The announcement stung certain fans, many of whom got extremely upset at the new game, partly because it is the only announced Diablo project on the horizon. According to a report from Kotaku, however, Blizzard debated whether to announce at the same time that Diablo 4 is in development, and ultimately decided against it.

According to Kotaku's sources, the original intention was to announce Diablo Immortal, then end on a video of Blizzard co-founder Allen Adham confirming Diablo 4's existence and saying that the game is still in development and not yet ready to show. Instead, the show ended with the Immortal announcement, concluding the conference on somewhat of a low note for fans in attendance as they sat silently waiting for a "One More Thing" coda.

The timeline matches a blog post from Blizzard a few weeks ago, in which the company tried to clamp down on expectations by more or less saying Diablo 4 is coming, but won't be at the show in any form. It is possible that Blizzard thought that announcement served the same purpose as the video, but it's hard to speculate on their reasoning for choosing not to announce Diablo 4.

The internet has produced an intense backlash to Diablo Immortal's announcement, which Blizzard confesses it did not expect, with some of the more vocal detractors openly wishing for the game to be cancelled. It is unknown whether announcing Diablo 4 at the same time would have mitigated the response, but companies have done similar things before. After the poor reaction to Metroid Prime: Federation Force, and the presumed death of classic Metroid, Nintendo seemingly tried to get ahead of the Metroid: Samus Returns announcement by showing Metroid Prime 4 first. Even though it was just a logo and title, fans were satiated by the Switch game announcement and did not feel burned by the Metroid 2 remake on 3DS.

Update:
In a follow-up comment made to Kotaku, Blizzard says it didn't pull anything from its show. "We generally don’t comment on rumors or speculation, but we can say that we didn’t pull any announcements from BlizzCon this year or have plans for other announcements," a Blizzard representative wrote. The developer didn't comment on whether or not Diablo 4 is indeed in the works, but did say that its studio is working on multiple "unannounced Diablo projects."

I don't envy the people who made that decision today. I can understand the desire to avoid announcing something still in flux, but a logo would probably have been fine, or even just an acknowledgment. For better or worse, fans feel a sense of ownership of the series they love, and that has to be acknowledged from a marketing perspective even if you don't agree with it.

One of Ubisoft's prospective plans for Assassin's Creed Odyssey was the introduction of mercenary occurrences: occasional live events that added epic mercenaries to the game for the player to meet and kill. The idea was similar in function to Hitman's elusive targets, time-limited assassination targets that encouraged players to re-engage with the game during set windows. For Ubisoft's game, however, a problem emerged when the last two mercenary events were cancelled on the eve of going live. Now the developer says it is putting a freeze on the entire idea while it works out the bugs.

"Two weeks ago, when we attempted to launch the first Epic Mercenary Event with Damais the Indifferent, we discovered that the content didn’t properly appear for a majority of our players," a Ubisoft community manager explained in a forum post. "It wasn’t satisfactory to have an event available to only a portion of our players, so we decided to temporarily remove the Epic Mercenary Events from the game altogether until the issue is resolved. We are working on a fix and are hoping to introduce them later this month. We will provide an updated ETA as soon as possible."

The first event was cancelled October 18, with a promise to renew the hunting in two weeks. On October 30, Ubisoft tweeted an announcement that it would not be happening then, either. This week, the company officially put the events on ice, though the forum post says they will return later this month, hopefully with more fruitful results than before.

Remember Damais the Indifferent? So far we've had no success in tracking him down, so, unfortunately, there won't be a live event this week.

We're working towards a solution to address an issue that prevents the mercenary live events from appearing as soon as we can. Stay tuned. pic.twitter.com/BHqEp1g3cb

In the meantime, Ubisoft is focusing on ship events, which are similar to the mercenary ones but built around naval battles. Fans have suggested that these suffer from a similar problem, but Ubisoft does not appear to be seeing the same degree of error there.

OMEMO is, like any other encryption protocol, based on trust. The user has to make sure that the keys they trust belong to the right users. Otherwise a so-called man-in-the-middle attack is possible: an attacker might pretend to be your contact and secretly steal all the messages you thought were encrypted. They are encrypted, just to the wrong recipient.

To counteract such attacks, OMEMO encourages the user to verify their contacts' fingerprints. A fingerprint can be considered the name of a contact's key. The user has to make sure that the key they are presented with really belongs to that contact by checking whether the fingerprints match. As those fingerprints are usually long, random series of characters, this is a tedious task. Luckily there are techniques like QR codes which make our lives easier. Instead of comparing two long strings character by character, you just scan a code and are done.

The QR code contains the Jabber ID of the owner, as well as all of their fingerprints. So by scanning your code, a friend can automatically add you to their contact list and mark your devices as trusted. You can also put the QR code on your personal website, so people who want to reach out can easily establish a secure connection to you.
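As a rough sketch of what such a code might encode, here is a small Python snippet that builds an `xmpp:` URI carrying a Jabber ID plus OMEMO fingerprints keyed by device ID. The exact `omemo-sid-<deviceid>=<fingerprint>` query layout is an assumption for illustration; real clients may use a slightly different scheme, and the resulting string would then be fed to any QR encoder.

```python
from urllib.parse import quote

def omemo_qr_payload(jid, fingerprints):
    """Build the text payload for an OMEMO QR code.

    `fingerprints` maps OMEMO device IDs to hex fingerprint strings.
    The query layout here is an illustrative assumption, not a spec.
    """
    params = ";".join(
        "omemo-sid-%d=%s" % (sid, quote(fp))
        for sid, fp in sorted(fingerprints.items())
    )
    return "xmpp:%s?%s" % (jid, params)

# Hypothetical JID and fingerprint, for illustration only:
print(omemo_qr_payload("alice@example.org", {1234: "05deadbeef"}))
# -> xmpp:alice@example.org?omemo-sid-1234=05deadbeef
```

A scanning client would parse the JID from the URI body and compare each embedded fingerprint against the keys it fetched from the server.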

I spent the last few days looking into JavaFX (I have no real UI design experience in Java, apart from Android), designing a small tool that can generate OMEMO fingerprint QR codes. This is what I came up with:

QR-Code generator with selectable fingerprints

The tool logs into your XMPP account and fetches all your published keys. Then it presents you with a list in which you can select which fingerprints you want to include in the QR code.

There are still a lot of features missing, and I consider the tool by no means complete. My plans are to add the ability to export QR codes to a file, as well as to copy the text content of the code to the clipboard. As you can see, there is a lot of work left to do; however, I wanted to share my thoughts with you, so that maybe client developers can adopt the idea.

Samsung updated some of its social media channels today with a new profile picture that shows the otherwise straight Samsung logo, now folded over instead. The timing coincides with the Samsung Developer Conference in San Francisco in two days, where it’s expected that Samsung will give the world a preview of the “Samsung Galaxy F” foldable tablet phone, or whatever its name might be. The model number of the … more...

THE developer responsible for millions of plastic shards that washed up near Dún Laoghaire baths will have to shoulder the costs of cleaning up the mess, the local authority has said. A massive clean-up operation has been under way since Friday, when...

When I step back and think about Flickr, its most important contribution to the world of APIs was all about the resources it made available. Flickr was the original image-sharing API, powering the growing blogosphere at the beginning of this century. Flickr gave us a simple interface for humans in 2004, and an API for other applications just six months later, providing us all with a place to upload the images we would be using across our storytelling on our blogs. It provided the API resources we would need to power the next decade of storytelling via our blogs, but it also set in motion the social evolution of the web, demonstrating that images were an essential building block of doing business on the web and, in just a couple of years, on the new mobile devices that would become ubiquitous in our lives.

Flickr was an important API resource, because it provided access to an important resource–our images. The API allowed you to share these meaningful resources on your blog, via Facebook and Twitter, and anywhere else you wanted. In 2005, this was huge. At the time, I was working to make a shift from being a developer lead to playing around with side businesses built using the different resources that were becoming available online via simple web APIs. Flickr quickly became a central actor in my digital resource toolbox, and I was using it regularly in my work. As an essential application, Flickr quickly got out of my way by offering an API. I would still use the Flickr interface, but increasingly I was just publishing images to Flickr via the API and embedding them in blogs and other marketing, doing what we began to call social media marketing, and eventually what I would rebrand as API Evangelist, making it more about the tooling I was using than the task I was accomplishing.

After thinking about Flickr as a core API resource, I always think next about the stories I’ve told about Flickr’s Caterina Fake, who coined the phrase “business development 2.0”. As I tell it, back in the early days of Flickr, the team was getting a lot of interest in the product, and was unable to respond to all the emails and phone calls. They simply told people to build on their API, and if they were doing something interesting, the team would know, because they had the API usage data. Flickr was going beyond the tech and using an API to help raise the bar for business development partnerships, putting the burden on the integrator to do the heavy lifting, write the code, and even build the user base, before you’d get the attention of the platform. If you were building something interesting and getting the attention of users, the Flickr team would be aware of it because of their API management tooling, and they would reach out to you to arrange some sort of partner relationship.

It makes for a good story. It resonates with business people. It speaks to the power of doing APIs. It also enjoys a position that omits many of the negative aspects of doing startups, which become too easy to look away from as a technologist focused purely on the tech, or as a business leader once the venture capital money begins flowing. Business development 2.0 has a wonderful libertarian, pull-yourself-up-by-your-bootstraps ring to it. You make valuable resources available, and smart developers will come along and innovate! Do amazing things you never thought of! If you build it, they will come. Which all feeds right into the sharecropping and exploitation that occurs within ecosystems, leading to less-than-ethical API providers poaching ideas and thinking it is ok to push public developers to work for free on their farm. The result is many startups seeing APIs as simply a free labor pool and a source of free roadmap ideas, manifesting concepts like the “cloud kill zone”. Business development 2.0, baby!!

Another dimension of this illness we like to omit is the lack of a business model. I mean, the shit is free! Why would we complain about free storage for all our images, with a free API? It is easier for us to overlook the anti-competitive approaches to pricing, and to complain down the road when each acquisition of the real product (Flickr) occurs, than it is to resist companies who lack a consumer-level business model, simply because we are all the product. Flickr, Twitter, Facebook, Gmail, and other tools we depend on are all free for a reason: they are market-creating services, and revenue is being generated at other levels, out of our view as consumers or API developers. We are just working on Maggie’s Farm, and her pa is reaping all the benefit. When it comes to Flickr, Maggie and her pa cashed out a long time ago, and the farm keeps getting sold and resold, all while we keep working away in the soil, giving them the digital bits we’ve cultivated there, until conditions finally become unacceptable enough to run us off.

I began moving off of Flickr a couple of years ago. I stopped using it for blog photo hosting in 2010. I stopped uploading photos there regularly over the last couple of years. The latest crackdown doesn’t mean much to me. It will impact my storytelling to potentially lose such an amazing resource of openly licensed photos. However, I’ve saved each photo I use, and its attribution, locally–hopefully my attribution links don’t begin to 404 at some point. Hopefully other openly licensed photo collections emerge on the horizon, and ideally SmugMug doesn’t do away with the openly licensed treasure trove it is now steward of. The latest acquisition and business model shift occurring across the Flickr platform doesn’t hit me too hard, but the situation does give me an opportunity to step back and reassess my API storytelling, and the role Flickr plays in my API Evangelist narrative. It gives me another opportunity to eliminate bullshit and harmful myths from my storytelling and myth-making–which I feel is getting pretty close to leaving me with nothing left to tell when it comes to APIs.

In the end, if I just focus purely on the tech, and ignore the business and politics of APIs, I can keep telling these bullshit stories. This is the real Flickr lesson for me. I’d say there are two reasons we perpetuate stories like this. One, “because we just didn’t know any better”, which is pretty weak. Two, it is how capitalism works. It is why us dudes, especially us white dudes, thrive so well in a Silicon Valley tech libertarian world: this type of myth-making benefits us, even when it repeatedly sets us up for failure. This is one of the things that makes me throw up a little (a lot) in my mouth when I think about the API Evangelist persona I’ve created. This entire reality makes it difficult for me to keep doing this API Evangelist theater each day. APIs are cool and all, but when they are wielded as part of this larger money-driven stream of consciousness, we (individuals) are always going to lose. In the end, why the fuck do I want to be a mouthpiece for this kind of exploitation? I don’t.

On this newest episode of The New Stack Makers podcast, TNS founder Alex Williams is joined by Stackery CEO Nate Taggart, at ServerlessConf San Francisco, to discuss the makings of Stackery, and how it has standardized on top of the AWS Serverless Application Model to the benefit of both its developers and enterprise customers. Prior […]

Augmenting its own growing practice in cloud native computing, VMware is in the process of acquiring Heptio, an enterprise-focused consulting firm founded by two of the initial developers of the open source Kubernetes container orchestration system. VMware announced the pending purchase at the company’s VMworld 2018 Europe conference Wednesday. Terms of the deal were not disclosed. VMware […]

My wife and I travel a fair amount around New England, and I've been increasingly troubled by the fact that so many locales seem more interesting than my own. This has long baffled me, since a town possessing as much wealth, creativity and artistic sensibility as Westport has for many years should be one of the region's jewels.
Mr. Waldman seems to be telling us that Westport actually IS such a place and, if I'm reading him correctly, he's blaming its increasingly prosaic reputation on our own collective bad attitude. And while he's right that the problem does become circular to a degree, he's wrong to be suggesting that to fix things all we have to do is get our minds right. He provides a long list of the complaints people have and then pretty much dismisses them all.
I've thought for a long time now about what it is that other New England towns have that we lack. I've concluded that the unfavorable contrast boils down mainly to the differing ways in which places have integrated their own histories into their growth. Westport has become a tear-down culture and it shows in both our residential and commercial development. "Charm" is an over-used word but I suppose it captures as well as any at least one dimension of what I'm getting at here. And charm, by its nature, has to emerge organically over time. It can't be imposed quickly by large developers and top-down planners, no matter how good they are or how talented the architects they hire to design new structures.
God bless the people who have now apparently rescued the Black Duck, and thanks to Dan for publicizing this situation. This is just one example though, and the place is still seemingly a doomed anachronism. It's nothing but a funky little dive, but people feel good about themselves and about one another there. Other restaurants, bars, stores and entertainment venues attract their own unique clienteles, some upscale, some downscale, and the interaction among all these diverse groupings is what creates a vibrant local culture.
You could shoot a cannonball down Main Street after dark and not hit anybody or probably even wake many people up.
I can't offer any grand solution for the problem I'm talking about here, since grand solutions can only aggravate it. I would just ask, as others have, that our town leaders be more sensitive to protecting what we have left before it's too late. They should see their job as creating a political infrastructure that's conducive to organic growth in the years ahead.

The Manoel Island Foundation has raised no objections to new plans for higher buildings on the island, noting that the overall building volume will remain unchanged. The foundation includes representatives of the Gżira local council, NGOs and developers Midi, and was set up to oversee a guardianship agreement signed between the stakeholders earlier this year. In a statement on Tuesday, the foundation said: “The amendments as proposed constitute a transfer of the residential volumes from the north clusters to the south clusters and the overall development remains a low rise development which does not exceed four floors across the island. Furthermore, the proposed amendments do not contemplate any change to the building volumes as originally proposed.” The foundation reiterated its commitment to ensuring that Midi adhered to the obligations in the guardianship deed.
The proposed amendments to Midi’s mega-development masterplan will increase the height of planned apartment blocks on the landward side of Manoel Island to four storeys, or 18.5 metres, instead of the three storeys (14.5 metres) in the original masterplan submitted last year.In exchange, the new plans eliminate some of...

The amount of revenue generated for the Planning Authority from a parking scheme has grown sevenfold in just five years, with this year’s figure already topping €3 million.
The Commuted Parking Payment Scheme levies an amount on developers who cannot offer on-site parking, and was originally designed to raise money to be used to provide public parking to cope with the additional demand.
Replying to a parliamentary question posed by MP Jason Azzopardi, Transport Minister Ian Borg said that in 2013, the scheme raised €475,000 but, by 2017, the amount had gone up to €3.4 million.
The minister noted that restaurants with outdoor tables and chairs did not have to pay into the scheme, unless they took up existing parking spaces. Since April 2017, a planning gain would have to be paid according to the number of spaces taken up.
The CPPS fees were raised in June 2018, when a three-tier rate system was introduced whereby one car space not provided on-site would cost the developer €2,500.
From the third to the ninth car space not provided for on site, the developer must contribute €6,000 per car space. From the tenth car space upwards, a €10,000 contribution per car space is imposed.
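The tiered arithmetic can be worked through in a short sketch. One assumption is needed: the article gives the rate for a single space explicitly, so the second space is assumed to be billed at the same €2,500 rate.

```python
def cpps_fee(missing_spaces):
    """Total CPPS contribution for car spaces not provided on-site,
    under the June 2018 three-tier rates. Assumes the first two spaces
    are both billed at the EUR 2,500 rate (only the rate for a single
    space is stated explicitly in the article)."""
    total = 0
    for n in range(1, missing_spaces + 1):
        if n <= 2:          # first (and assumed second) space
            total += 2_500
        elif n <= 9:        # third to ninth space
            total += 6_000
        else:               # tenth space upwards
            total += 10_000
    return total

print(cpps_fee(1))   # -> 2500
print(cpps_fee(12))  # -> 77000  (2 x 2,500 + 7 x 6,000 + 3 x 10,000)
```

So a development short twelve spaces would, under this reading, owe €77,000 rather than twelve times any single flat rate.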

Job Description
Solutions Architect/Senior Developer to work on bespoke, customer-specific development projects based on market-leading technologies within a busy custom solutions development team. Building key relationships with the SI Practice, Professional Services and Sales teams internally, as well as our valued customers externally.
Main Responsibilities
• Undertake development projects requiring the use of market-leading technologies: HTML, Javascript, various leading frameworks, ASP.NET, C#, styling languages, developer tools
• Undertake the installation, configuration and development requirements of web development projects dependent on demand and skills
• Effective and regular communication with clients and management
• Work on client sites, in Advanced offices and at home
• Gathering requirements and writing proposals
• User and system documentation
• Project management
Qualifications
• Bachelor's degree in Computing or a related subject, or relevant commercial experience
• IT qualifications (advantageous)
• Strong communication skills, written and spoken
• Well-presented, with good interpersonal skills
Experience
Essential
• Web development - HTML, HTML5, Javascript, AJAX, CSS etc.
• At least 5 years of .NET/C# development, including Visual Studio
• Microsoft SQL Server and related market-leading technologies
• Relevant experience in a similar role
Preferred
• Microsoft SharePoint
• SQL Server - SSIS, SSRS
Relevant
• Mobile development - Xamarin, OSX, iOS, Android; mobile frameworks: PhoneGap, JQuery Mobile
• IDEs - Eclipse, XCode
• Databases - Oracle, MySQL, PostgreSQL
• Application servers - JBOSS, Tomcat, IIS
• XML – XML Schema, XQuery, XSLT, XPath
• Build - SVN, CVS, Mercurial, ANT, GitHub
Competencies
• ‘Can-do’ pro-active attitude
• Attention to detail – focus on quality
• Able to analyse business requirements effectively
• Creativity and innovation in designing solutions
• Teamwork – able to work effectively as part of a team or independently
• Coaching – self-development; willing and able to learn new skills
• Drive for achievement – keen to progress
• Initiative – project ownership
• Resilience
• Customer service orientation
Join the Team
Some of our key benefits are:
• Excellent benefits from day one: 25 days' holiday plus the opportunity to buy or sell up to 5 days, contributory pension, life insurance, income protection insurance, childcare voucher salary sacrifice, cycle-to-work scheme, employee assistance programme
• Special focus on training and development, with the opportunity to advance your career through our internal Talent Development Team
• Be part of an organisation recently ranked by Deloitte among the Top 50 fastest-growing tech companies
Advanced are an equal opportunity employer, committed to removing bias from the hiring process. We hire for potential and develop at pace. If successful, you can expect to take an online assessment, meet the HR team and attend a final interview. Click apply now, and a member of our in-house talent acquisition team will be in touch!

Song Do, the newly designed city of the future, is having teething problems. How long will it last as the sea level rises and reclaims the marshland the city took over? But it also has a very dark side.
<i>LOOKING wistfully around at the surroundings, a strange mix of marshland and random high-rise buildings, Shim Jong-rae shakes his head, echoing the sentiment of many residents: “It’s a ghost town.”
For more than a decade, urban planners have been studying the construction of Songdo, South Korea, the world’s first smart city. Built within 25 miles of Seoul, it was to be the antithesis of the suffocating, overpopulated capital. A new way of thinking for more than 300,000 residents, spread out over 600 hectares of land reclaimed from the Yellow Sea.
The brainchild of developers and the government, the vision was to construct a car-free world, with 40 per cent green space and dozens of kilometres of cycling routes.</i>
https://www.scmp.com/week-asia/business/article/2137838/south-koreas-smart-city-songdo-not-quite-smart-enough
But even worse, Songdo represents ecohabitat destruction.
<i>Reclamation of internationally important tidal-flats at Song Do, as elsewhere, has already resulted in substantial declines in some species of waterbird at the local, national and even population level. Some of the species in decline are globally threatened. These declines will continue with further reclamation.
Simply, a genuine eco-city would not be built on tidal-flats. A genuine eco-city would cherish remaining tidal-flats, and aim to restore and enhance intertidal wetland – and not permit or promote its destruction. Unlike the ROK and China, most nations gave up large-scale tidal-flat reclamation several decades ago. Now, the United States of America, the UK, the Netherlands, Germany and Italy, among a host of other nations, are instead actively investing in restoring coastal wetlands and tidal-flats. Restoration, while expensive and less efficient than conservation of natural intertidal wetland, still helps these nations move closer to meeting their formal obligations to the conservation conventions, and to conserving the migratory bird and fish species that depend on tidal-flats. It also allows them (and future generations of their citizens) to reap the multiple environmental, social and economic benefits of conservation through restoring ecosystem health and services.
Is this really the most economical, efficient and environmentally sound use of land that until a few years ago was some of the most naturally productive and wildlife-rich wetland in the world?</i>
http://www.birdskorea.org/Habitats/Wetlands/Songdo/BK-HA-Songdo-Is-Song-Do-an-Example-of-Sustainable-Development.shtml
Anyone see a triple whammy here?

TX-Dallas, Are you a strong Technical Recruiter looking for a new recruiting engagement? Are you open to the Grapevine/Southlake, TX area? Do you have success in sourcing and placing Software Developers, QA Engineers, BI Developers, Data Analysts/Scientists? If so, let's connect! The position is long term with possible full time opportunities down the road depending on level of performance. Our Client’s offi

TX-Irving, .NET Developer/Perm/Irving/Los Colinas, TX/70k-85k .NET DEVELOPER- PERM- IRVING/ LAS COLINAS, TX The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply. Our client located in Irving/Las Colinas, TX is currently seeking a .NET Developer to join their team. Our client partners with companies to sup

TX-Dallas, Job Description: Mastech Digital provides digital and mainstream technology staff as well as Digital Transformation Services for leading American Corporations. We are currently seeking a Ruby on Rails Developer for our client in the Information Technology domain. We value our professionals, providing comprehensive benefits, exciting challenges, and the opportunity for growth. This is a Contract po

TX-Irving, Hi, Hope you are doing good!! This is Hanumesh from Reliable Software. We have a position with one of our client and I think you could be a good fit for the role. Below are a few details pertaining to the job. Please take a look at it and let me know if you would like to be considered for the opportunity. Please share with me your updated resume. Role: Senior Salesforce Administrator/Developer Loc

TX-Irving, Our client is currently seeking a Full Stack Java Developer Full-time permanent position in Irving, Texas Great benefits, 401k, pension Job Description: •Seeking lead web services Java software engineer 5+ years of experience. Proficient in full-stack web development using front-end frameworks (ReactJS or AngularJS) and backend frameworks Java (Spring), JavaScript (Node) as well as HTML5 and CSS3.

MA-Boston, Boston based high tech firm seeking two back end Java Developers/Software Engineers for their growing team! Immediate need for Java focused Engineers to write custom cloud applications and 3rd party web API's on a SaaS based flagship product. Applicants should be comfortable implementing and leveraging web APIs in Java. As a Back End Focused Software Engineer, you will have the opportunity to writ

NY-WILLIAMSVILLE, The Website Application Developer will be responsible for programming functions of the public and intranet websites for the company. Works with the Web Development Manager, Web Application Developers and UX Developers to produce functional website applications. Primary Responsibilities include: • Developing and testing applications for the company’s websites • Integration of the company’s websites

TX-Grapevine, SUMMARY The International Database Administrator is responsible for feature design and development, maintenance, and production support of all corporate data management solutions for GameStop, Inc. International department. This position is also responsible for supporting multiple third party databases. RESPONSIBILITIES * Collaborates with other Information Technology associates and functional bus

While the council planners and their political allies run around in a minor panic trying to square the student accommodation circle, a feat which will almost by definition defeat them, the potential developers of our cultural quarter have retired to completely revise their drawings, because of the sheer weight of what is known as "material planning objections" together with a visceral set of objections from more than 600 ordinary members of the public.

Powerful developers just took control of properties they needed to build long-in-coming mega-projects on two different blocks between Fifth and Sixth avenues. Gary Barnett’s Extell Development Company bought 4-story 32 W. 48th St., former home to the Plaza Arcade diamond mini-mall, for $40 million, and at the same time closed on title to several adjacent...

Timeport. 2.7.2

- Private photo and video galleries.
- All your accounts and strong passwords.
- Folders.
- Cards.
- Real documents.
- Notes, ideas.
If you liked our app, please write a review, or send suggestions to: info@unclesoft.com. Thank you :)
Have you ever forgotten your password, the serial number of your passport, or the PIN code for your credit card? In today's Internet space, we are constantly confronted with different forms to fill out and passwords to come up with and remember. Try the free Timeport app now! It has an exceptionally original design. It is reliable, safe and high tech. There is nothing like it! Timeport also features button sound effects and pleasant background music, which can easily be disabled if desired.
Important! In addition, Timeport, unlike other similar applications, does not have a server, which means the developers do not receive or collect personal information about you and your family. The application works without Internet access; all data is stored on your device. Your vital information is protected by Timeport.
▪ Encrypts all your data using authenticated AES 256-bit encryption.
▪ Password Generator creates a complex and unique Timeport password.
▪ Keep the number and PIN code of your credit card by assigning a status to your card.
▪ Use original and realistic document templates (passport, driver's licence and many others…)
▪ Write secret notes, ideas, thoughts and other writing.
▪ Send documents to print directly from your device.
▪ Keep photos and videos in a place accessible only to you.
▪ Edit and share your documents and cards via SMS, iMessage, E-Mail, AirDrop
▪ Safe and convenient entry into the application with Touch ID
▪ Synchronisation with other devices via iCloud
▪ Automatic locking of the program
▪ Notification Center will always remind you about the validity period of your documents and cards
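As an aside on the "Password Generator" claim above, a complex random password of the kind such apps advertise can be sketched in a few lines using Python's standard secrets module. This is an illustrative sketch only; Timeport's actual algorithm is not published.

```python
import secrets
import string

def generate_password(length=20):
    """Generate a complex random password from letters, digits and
    punctuation, drawn from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # random each run, e.g. 'r;V9}bQ#...'
```

The key point is using `secrets` rather than `random`, since the latter is not suitable for security-sensitive values.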

The SKIP system for ordering kanji was developed by Jack Halpern (www.kanji.org), and is used with his permission.
KanjiVG (animations) copyright Ulrich Apel.

I'm Justin. I moved to Japan in 2002 and I'm married to my Japanese wife (lovingly called "Spellchecker"). I've taken the JLPT tests up to N1 (yes, I was there) using this app. I speak, read and write Japanese fluently and I'm an overactive developer who can't sit still.

What's New

- Some iCloud syncing issues fixed.
- iPhone XS/XR ready.

Apologies for two quick updates in a row. I just had to get these iCloud fixes out there.

Project Highrise 1.0.9

Unleash your inner architect as the PC mega-hit arrives on iPad! Playing as both architect and developer, your job is to build world-famous skyscrapers that will be the envy of the entire city. Manage every aspect of your building from construction through to keeping your tenants happy. Success is entirely in your hands.

PLAY THE WAY YOU WANT
Will you create an exclusive office highrise that attracts business leaders from around the world? Will you construct luxury apartments in the sky, penthouses for the elite and playgrounds for the famous? The choice is yours.

MAKE ALL THE DECISIONS

As a savvy developer, you must keep an eye on the bottom line and invest in the future. Succeed, and you will reap the rewards of a prestigious address where everyone will clamour to live and work. Fail, and you will watch tenants leave in disgust, taking their business elsewhere and leaving your reputation in tatters.

FEATURES

- Deep and complex simulation of a modern skyscraper.
- Huge variety of tenants with their own unique characteristics.
- Open sandbox play with several difficulty levels and starting conditions, allowing you to build your dream skyscraper.
- Campaign mode that tests your skill at building a successful highrise in challenging scenarios.
- Test your management mettle by keeping up with your building's diverse population and their ever-increasing demands.
- Hire specialized consultants to increase your building's curb appeal, operational efficiency, and pull with city hall.

NOTE

• Parts of this game require an active Internet connection.

SUPPORT

Problems & Questions: Visit www.kalypsomediamobile.com or write us an e-mail at supportmobile@kalypsomedia.com

OH-Cincinnati, Looking for a Strong Java Developer Job Title: Java Developer Location: Cincinnati OH Duration: 12 Months Must be able to work as a W2 hourly consultant Required Skills: Application development using one or more of the following: Java, J2EE, and/or Enterprise Java solutions Java frameworks, including Spring, JSF and Hibernate Experience in back end technologies Hibernate, struts, SOAP, REST and AP

The headline feature for 0.4.10 would have to be ReactOS' ability to now boot from a BTRFS formatted drive. The work enabling this was part of this year's Google Summer of Code with student developer Victor Perevertkin. While the actual filesystem driver itself is from the WinBtrfs project by Mark Harmstone, much of Victor's work was in filling out the bits and pieces of ReactOS that the driver expected to interact with. The filesystem stack in ReactOS is arguably one of the less mature components by simple dint of there being so few open source NT filesystem drivers to test against. Those that the project uses internally have all gone through enough iterations that gaps in ReactOS are worked around. WinBtrfs on the other hand came with no such baggage to its history and instead made full use of the documented NT filesystem driver API.

Seems like another solid release. While ReactOS always feels a bit like chasing an unobtainable goal, I'm still incredibly impressed by their work, and at this point, it does seem like it can serve quite a few basic needs through running actual Win32 applications.

TX-Houston, A customer in South Houston is looking for a Tableau BI Developer with SQL experience. This is for a lead role and will be developing tableau into power BI. The group consists of a DBA and a Jr Developer. This will be the resource that is going to lead the entire group. We need more of a lead developer for this group. They are taking the data from the SCADA application into a custom tableau dashbo

If you have ever tried to create a WPF application with a larger memory footprint (>500MB), you will notice some random hangs in the UI which become multi-second hangs for no apparent reason. On a test machine with a decent Xeon CPU I have seen 35s UI hangs because WPF frequently (up to every 850ms) calls GC.Collect(2). The root cause of the problem is that WPF was designed with a business application developer in mind who never gets resource management right. For that reason WPF Bitmaps and other things do not even bother to implement the IDisposable interface to clean up resources deterministically. Instead the cleanup is left as an exercise for the Garbage Collector, working hand in hand with the Finalizer thread.

That can lead to problems. Suppose a 32 bit application where the user is scrolling through a virtual ListView with many bitmaps inside it. This operation will cause the allocation of many temporary Bitmaps which will quickly become garbage. Because the Bitmaps are small objects on the managed heap while the actual Bitmap data is stored in unmanaged memory, the Garbage Collector sees no need to clean things up for a long time. In effect it did happen that your application ran out of unmanaged memory long before the Garbage Collector was able to release the bitmaps in the Finalizer. That led to one of the worst hacks in WPF. It is called MemoryPressure. Let's have a look at how it is implemented:

//
// About the thresholds:
// For the inter-allocation threshold 850ms is the longest time between allocations on a high-end
// machine for an image application loading many large (several M pixel) images continuously.
// This falls well below user-interaction time (which is on the order of several seconds) so it
// differentiates nicely between the two
//
// The initial threshold of 1MB is so we don't force GCs when the total amount of unmanaged memory
// isn't a big deal. The point of this code is to stop unmanaged memory from spiraling out of control
// at that point it's typically in the 10s of MBs. This threshold thus could potentially be increased
// but current testing shows it is adequate.
//
// The max time between collections was set to 30 sec because that is a 'long time' - this is
// for the case where allocations (and frees) of images are happening continously without
// pause - we haven't seen scenarios that do this yet so it's possible this threshold could also
// be increased
//
private const int INITIAL_THRESHOLD = 0x100000;         // 1 MB initial threshold
private const int INTER_ALLOCATION_THRESHOLD = 850;     // ms allowed between allocations
private const int MAX_TIME_BETWEEN_COLLECTIONS = 30000; // ms between collections

/// <summary>
/// Check the timers and decide if enough time has elapsed to
/// force a collection
/// </summary>
private static void ProcessAdd()
{
    bool shouldCollect = false;
    if (_totalMemory >= INITIAL_THRESHOLD)
    {
        //
        // need to synchronize access to the timers, both for the integrity
        // of the elapsed time and to ensure they are reset and started
        // properly
        //
        lock (lockObj)
        {
            //
            // if it's been long enough since the last allocation
            // or too long since the last forced collection, collect
            //
            if (_allocationTimer.ElapsedMilliseconds >= INTER_ALLOCATION_THRESHOLD
                || (_collectionTimer.ElapsedMilliseconds > MAX_TIME_BETWEEN_COLLECTIONS))
            {
                _collectionTimer.Reset();
                _collectionTimer.Start();
                shouldCollect = true;
            }
            _allocationTimer.Reset();
            _allocationTimer.Start();
        }
        //
        // now that we're out of the lock do the collection
        //
        if (shouldCollect)
        {
            Collect();
        }
    }
    return;
}

/// <summary>
/// Forces a collection.
/// </summary>
private static void Collect()
{
    //
    // for now only force Gen 2 GCs to ensure we clean up memory
    // These will be forced infrequently and the memory we're tracking
    // is very long lived so it's ok
    //
    GC.Collect(2);
}

This beauty calls GC.Collect(2) every 850ms if no Bitmap was allocated in between, or every 30s regardless of how many Bitmaps were allocated. With .NET 4.5 we got concurrent garbage collection, which dramatically reduces how long all application threads are blocked while a garbage collection is happening. For common workloads a "normal" .NET application gets 10-15% faster without any code change. These improvements are all nullified by forcing a full blocking garbage collection.

To demonstrate the effect I have created a simple test application which allocates 1 GB of small objects on a background thread while the UI thread allocates one WPF bitmap every 850ms, and compares that to allocating an "old" WinForms Bitmap object at the same rate.

If you measure for some heap sizes you quickly see that your application will become dramatically slower the more memory it consumes due to the forced blocking garbage collection caused by WPF. The x-axis shows the managed heap size in MB and the y-axis shows the time needed to allocate 1 GB of small objects in a background thread.

You have a multi-GB WPF application and the user experience is just awful and slow? You can google for good answers on Stack Overflow which tell you that you need to use Reflection to set private fields in the internal MemoryPressure class of WPF. Not exactly a production grade "fix" for the issue.

But there is hope. The new public beta of .NET Framework 4.6.2 contains a fix for it. The MemoryPressure class is gone, and your Stack Overflow "fix" will now cause exceptions if you did not prepare for the possibility that Microsoft would dare to remove an internal class. WPF now adheres to the long-recommended GC.AddMemoryPressure call to tell the Garbage Collector that some managed objects also hold significant unmanaged memory.

With .NET 4.6.2 you can finally create snappy managed applications again, without long forced garbage collection pauses. You can measure the GC pause times with my custom WPA profile in no time:

That is nice, but with my custom WPA profile and the streamlined default.stacktags file you can see even more:

There you can clearly see that as the managed heap grows, the Induced GC times get bigger, just as you would expect from the GC regions. To get the same view you need to download my simplified WPA profile, which I have updated with the latest stacktags I found useful during past analyses. To make it active, open Trace - Trace Properties from the menu, remove the current file and add the downloaded stacktags file. Or you can simply overwrite the default.stacktags file that comes with WPA.

The new improved stacktags file gives you fast insights into your application and your system which is not really possible with other tools. With a nice stacktags file you can create your very own view of the system. The updated stacktags file contains tags for common serializers, exception processing, and many more things which are useful during analysis of performance issues or application failures.

I had an interesting case where a new WPF control was added to a legacy WinForms application. The WPF control worked perfectly in a test application, but for some strange reason it was very slow in the final WinForms application, where it was hosted with the usual System.Windows.Forms.Integration.ElementHost. The UI hung and one core was always maxed out. Eventually the UI came up after some minutes, but even simple button presses caused 100% CPU on one core for 20s. If you see high CPU consumption, the instinctive reaction of a developer is to attach a debugger and break into the methods to see where the issue is. If you use a real debugger like Windbg you can use the !runaway command to find the threads with the highest CPU usage.

Eventually I would find some non-waiting stacks, but it was not clear whether these were the most expensive ones and why. The problem here is that most people are not aware that the actual drawing happens not in user mode but in an extended kernel-space thread. Every time you wait in NtUserWaitMessage the thread on the kernel side can continue its execution, but you cannot see what's happening as long as you are only looking at the user-space side.

If debugging fails you can still use a profiler. It is about time to tell you a well hidden secret of the newest Windows Performance Toolkit. If you record profiling data with WPR/UI and enable the profile Desktop composition activity, new views under Video will become visible when you open the trace file with WPA. Most views seem to be for kernel developers, but one view named Dwm Frame Details Rectangle By Type is different. It shows all rectangles drawn by Dwm (the Desktop Window Manager). WPA shows not only the flat list of updated rectangles and their coordinates but also draws them in the graph for the selected time region. You can use this view as a poor man's screenshot tool to visually correlate the displayed message boxes and other windows with the performed user actions. This way you can visually navigate through your ETL file and see which windows were drawn at specific points in your trace!

That is a powerful capability of WPA of which I was totally unaware until I needed to analyze this WPF performance problem. If you are more of an xperf fan, you need to add this to your user mode providers list:

Microsoft-Windows-Dwm-Core:0x1ffff:0x6

and you are ready to record pretty much any screen rectangle update. This works only on Windows 8 machines or later. Windows 7 knows the DWM-Core provider, but it does not emit the necessary events to draw the Dwm rectangles in WPA. The rectangle drawing feature of WPA was added with the Win10 SDK release of December 2016. OK, so we see more. Now back to our perf problem. I could see that only two threads were involved, consuming large amounts of CPU on the UI thread and the WPF render thread for a seemingly simple screen update. A little clicking around in the UI would cause excessive CPU usage. Most CPU is used in the WPF rendering thread.

If that does not make much sense to you, you are in good company. The WPF rendering thread is rendering a composite window (see CComposition::Present), which seems to use a feature of Windows that also knows about composite windows. After looking with Spy at the actual window creation parameters of the hosting WinForms application,

it turned out that the Windows Forms window had the WS_EX_COMPOSITED flag set. I write this here as if it were flat obvious. It is certainly not. Solving such problems always involves asking more people for their opinion on what could be the issue. The final hint that the WinForms application had this extended style set was discovered by a colleague of mine. Nobody can know everything, but as a team you can tackle pretty much any issue.

A little googling reveals that many people before me have also had problems with composite windows. This flag basically inverts the z-rendering order. The visual effect is that the bottom window is rendered first. That allows you to create translucent windows where the windows below your window shine through as background. WPF uses such things for certain visual effects.

That is enough information to create a minimal reproducer of the issue. All I needed was a default Windows Forms application which hosts a WPF user control. The complete zipped sample can be found on Onedrive.

When these three conditions are met, you have a massive WPF redraw problem. It seems that two composite windows cause rendering loops inside the OS, deep in the kernel threads where the actual rendering takes place. If you let WPF use HW acceleration it seems to be OK, but I have not measured how much GPU power is then wasted. Below is a screenshot of the sample WinForms application:

Once the root cause was found, the solution was to remove the WS_EX_COMPOSITED window style from the WinForms hosting window, which did not need it anyway.

Media Experience Analyzer

The problem was solved, but it is interesting to see the thread interactions happening while the high CPU issue occurs. For that you can use a new MS tool named Media Experience Analyzer (XA), which was first released in Dec 2015; the latest version is from Feb 2016. If you thought that WPA is complex, then you have not yet seen how else you can visualize the rich ETW data. This tool is very good at visualizing thread interactions in a per-core view like you can see below. When you hover over the threads, the corresponding context switch and ready thread stacks are updated on the fly. If you zoom out it looks like a star field in Star Trek, just with more colors.

If you want to get the most out of XA, you can watch the videos at Channel 9, which give you a pretty good understanding of how Media Experience Analyzer (XA) can be used.

When should you use WPA and when Media Experience Analyzer?

So far the main goal of XA seems to be finding hangs and glitches in audio and video playback. That requires a thorough understanding of how the whole rendering pipeline in Windows works, which is a huge field of its own. But it can also be used to get a different view on the data which is not so easy to obtain in WPA. If threads are ping-ponging each other, this tool makes it flat obvious. XA is already powerful, but I do not entirely follow its UI philosophy, where you must visually spot the issue in the rendered data. Most often tabular data as in WPA is more powerful, because you can sort by columns and filter away specific call stacks, which seems not to be possible with XA. What I miss most in XA is a simple process summary timeline like in the first screenshot. XA renders some nice line graphs, but that is not very helpful for getting a fast overview of the total CPU consumption. If you look at the complete trace with the scheduler events and the per-process CPU consumption in

XA

WPA

I have a much easier time in WPA identifying my process with the table and color encoding. In XA you always need to hover over the data to see its actual value. A killer feature in XA would be a thread interaction view for a specific process. Ideally I would like to see all threads as bars, where the bar length is either the CPU or the wait time. Currently I can only see one thread, color-encoded by the core it is running on. This is certainly the best view for device driver devs, but normally I am not interested in a per-core view but in a per-thread timeline view. Each thread should have a specific y-value, the horizontal bar length should show either its running or waiting time (or both), and a line should show the readying thread as is already done today.

That would be the perfect thread interaction view and I hope that will be added to XA. The current version is still a 1.0 so expect some crashes and bugs but it has a lot of potential. The issues I encountered so far are

If you press Turn Symbol Off while it is still loading, it crashes.

The ETL file loading time is very high because it seems to query some private MS symbol servers, so the UI hangs for several minutes (zero CPU but a bit of network IO).

UI Redraws for bigger (>200MB) ETL files are very slow. Most time seems to be spent in the GPU driver.

XA certainly has many more features I have not yet found. The main problem with these tools is that the written documentation only touches the surface. Most things I have learned by playing around with the tools. If you want to share your experiences with WPA or XA, please sound off in the comments. Now stop reading and start playing with the next cool tool!

The bugs produced by developers are legion, but why are advanced debugging skills still rare in the wild? How do you solve problems if you do not have the technical know-how to do a full root cause analysis across all the tech stacks in use?

Simple bugs are always reproducible in your development environment and can easily be found with visual debuggers in your favorite IDE. Things get harder if your application consistently crashes at customer sites. In that case environmental problems are often the root cause, and these mostly cannot be reproduced in the lab. Either you install a debugger on your customers' production machines, or you need to learn how to use memory dumps and analyze them back home.

There are also many other tools for Windows troubleshooting available, like Process Explorer, Process Monitor, Process Hacker, VMMap, …, which help a lot to diagnose many issues without ever using a debugger. With some effort you can learn to use these tools, and you will be able to solve many problems you encounter during development or on customer machines.

Things get interesting if you get fatal sporadic issues in your application which result in data loss, or if it breaks randomly on only some customer machines. You can narrow down where the application is crashing, but if you have no idea how you got there, some industry "best practice" anti-patterns are used:

You know the module which breaks and you rewrite it.

You do not even know that. If the problem is sporadic, tinker with the code until it becomes sporadic enough to no longer be an urgent problem.

That is the spirit of good enough, but certainly not of technical excellence. Even if you otherwise follow all the good patterns like Clean Code and Refactoring, you will still collect over the years more and more subtle race conditions and memory corruptions in central modules, which then need a rewrite not because the code is bad, but because no one is able to understand why it fails and to fix it.

I am surprised that so many companies, especially small ones, can get away with dealing with technical debt that way without going out of business. Since most software projects are tight on budget and some error margin is expected by the customers, they can live pretty well with worked-around errors. I am not complaining that this is the wrong approach. It may be more economical to bring a green banana to market, see what the customers are actually using, and then polish the biggest user-facing features fast enough before the users step away from the product. The cloud business brings fascinating opportunities to quickly roll out software updates with new features or fixes to all of your customers. But you need to be sure that the new version does not break in a bad way, or all of your customers will notice it immediately.

Did you ever encounter bugs which you were not able to solve? What creative solutions did you come up with?

Call Screening is one of the main features of the Google Pixel 3 which people have been dying to try out. But if you are on a limited budget, one XDA Developer has found a way to bring Call Screening to your current Google Pixel device. Thanks to XDA Senior Member coolsid8, the cool Google ...

I see a lot of different code and issues. One interesting bug was where someone removed a few lines of code, yet the regression test suite consistently reported a 100ms slowdown. Luckily the regression test suite was using ETW by default, so I could compare the good baseline with the bad one, and I could also take a look at the code change. The profiling diff did not make much sense: there was a slowdown, but for no apparent reason the CultureInfo.CurrentCulture.DisplayName property had become ca. 100ms slower.

How can that be? To make things even more mysterious, when they changed some other unrelated code the numbers returned to normal. After looking into it more deeply I found that the basic application logic had not slowed down. Instead some unrelated methods had just become much more CPU hungry, namely the internal CLR method COMInterlocked::CompareExchange64. The interesting thing is that it happened only under 32 bit; under 64 bit the error went away. If you are totally confused by now, you are in good company. But there is hope. I had encountered a similar problem over a year ago. I knew therefore that it had something to do with the interlocked intrinsics for 64 bit operands in 32 bit code. The most prominent on 32 bit is the lock cmpxchg8b instruction, which is heavily used by the CLR interlocked methods. To reproduce the problem cleanly I wrote a little C program where I played around a bit to see what the real issue is. It turns out it is ……

Memory Alignment

A picture will tell more than 1000 words:

The CPU cache is organized in cache lines which are usually 64 bytes wide. You can find out the cache line size of your CPU with the nice Sysinternals tool Coreinfo. On my Haswell home machine it prints something like this

The most important number for the following is the LineSize of 64, which tells us how big the smallest memory unit is that is managed by the CPU cache controller. Now back to our slow lock cmpxchg8b instruction. The effect of the lock prefix is that one core gets exclusive access to a memory location. This is usually implemented on the CPU by locking one cache line, which is quite fast. But what happens if the variable spans two cache lines? In that case the CPU seems to lock all cache lines, which is much more expensive. The effect is that the operation becomes at least 10-20 times slower than before. It seems that our .NET application in x86 allocated a 64 bit variable on a 4 byte (int32) boundary at an address that crossed two cache lines (see picture above). If by bad luck we use variable 7 for a 64 bit interlocked operation, we cause an expensive global cache lock.

Since under 64 bit the class layout is usually 8 byte aligned, we practically never experience variables spanning two cache lines, which makes all cache line related errors go away; our application was working as expected under 64 bit. The issue is still there, but the class layout makes it much harder to get into this situation. Under 32 bit, however, we frequently find data structures with 4 byte alignment, which can cause sudden slowdowns if the memory location we hit sits on a cache line boundary. Now it is easy to write a repro for the issue:

That is all. You only need to allocate on the managed heap enough data so the other data structures will at some point hit a cache line boundary. To force this you can try different byte counts with a simple for loop on the command line:

You can play with the little sample for yourself to find the worst performing version on your machine. If you now look at WPA with a differential view you will find that CompareExchange64 is responsible for the measured difference:

Since that was such a nice problem, here is the actual C code I used to verify that the issue pops up only at cache line boundaries:

… The integrity of a bus lock is not affected by the alignment of the memory field. The LOCK semantics are followed
for as many bus cycles as necessary to update the entire operand. However, it is recommend that locked accesses
be aligned on their natural boundaries for better system performance:
• Any boundary for an 8-bit access (locked or otherwise).
• 16-bit boundary for locked word accesses.
• 32-bit boundary for locked doubleword accesses.
• 64-bit boundary for locked quadword accesses. …

The word better should be written in big red letters. Unfortunately it seems that 32 bit code has a much higher probability of causing random performance issues in real world applications than 64 bit code, due to the memory layout of some data structures. This is not an issue which only makes your own application slower. If you execute the C version concurrently

start cmpxchg.exe && cmpxchg.exe

then you will get not 1s but 1,5s of runtime because of the processor bus locking. In reality it is not as bad as this test suggests, because if the other application uses correctly aligned variables it will operate at normal speed. But if two applications exhibit the same error they will slow each other down.

If you use an allocator which does not care about natural variable alignment rules, such as the GC allocator, you can run into issues which can be pretty hard to find. 64 bit code can also be plagued by such issues because we also have 128 bit interlocked intrinsics. With the AVX2 SIMD extensions, memory alignment is becoming mainstream again. If people tell you that memory alignment and CPU caches play no role in today's high level programming languages, you can prove them wrong with a simple 8 line C# application. To come to an end and to answer the question in the headline: no, it is not a CPU bug but an important detail of how CPU performance is affected if you use interlocked intrinsics on variables which span more than one cache line. Performance is an implementation detail. To find out how bad it gets, you need to measure for yourself in your scenario.

issues. A special case is network which falls into the wait issue category where we wait for some external resource.

When you download my simplified profile and apply it to the provided sample ETL file, you can analyze any of the above issues in much less time. Here is a screenshot of the default tab you will see when you open the ETL file.

Stack Tags are Important

The first and most important graph is CPU Usage Sampled with "Utilization by Process And StackTags", which is a customized view. It is usable for C++ as well as for .NET applications. If you ever wondered what stack tags are good for, you can see it for yourself. I have added a stack tag named Regular Expression which is set for all calls to the .NET Regex class like this:
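The referenced snippet did not survive here; a stacktags definition for such a tag looks roughly like this (a sketch of the WPA stacktags format — the tag name and Regex method pattern follow the text, while the exact module name is an assumption):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Tag Name="">
  <!-- Tags every sampled stack that contains a call into the .NET Regex class -->
  <Tag Name="Regular Expression">
    <Entrypoint Module="System.dll" Method="System.Text.RegularExpressions.Regex*"/>
  </Tag>
</Tag>
```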

If more than one tag can match, the deepest method with the first stack tag is used. This is the reason why the default stack tag file is pretty much useless. If you add tags for your application, they will never match, because the low level tags will match long before the tags in your own code could ever match. You have to comment out all the predefined stuff in it. If you use my stack tag file, you need to remove the WPA-provided default.stacktags under Trace - Trace Properties - Stack Tags Definitions. In practice I overwrite the default file to get rid of it. If you leave it, you will get e.g. very cheap CPU times for a GC heavy application, because all call stacks of your application which trigger GC work are added to the GC stack tag. This makes your application specific stack tags look much slimmer than they really are, because your application's methods come later in the call stack and are therefore never used to tag those stacks.

Why would I need a custom stack tag file? It makes it much easier to tag expensive high level operations of your application, so you can see how much time you were spending with e.g. loading game data, decompressing things, rendering, … This makes it easy to detect patterns in what your application was doing at a high level. Besides, if you find a CPU bottleneck in your application you can add it under e.g. a "Problems" node, so you can document already known issues which are then easy to spot.

For our problematic application PerformanceIssuesGenerator.exe we see that it spent 17,5s of CPU time doing Regex work (Weight in View is ms in time units). To see how long the actual running time was, we need to add the Thread ID column, since currently we sum the CPU time of all threads, which is not the actual clock time we spent waiting for completion.

The context menu is actually customized. It is much shorter and contains the most relevant columns I find useful. If you want more of the old columns then you can simply drag and drop columns from the View Editor menu which is part of all WPA tables. If you want to remove additional columns you can also drag and drop columns back to the left again. This way you can streamline all of your column selection context menus which is especially useful for the CPU Usage Precise context menu which is huge.

Select A Time Range to Analyze

Now we see that we have two large Regex CPU consumers with a large time gap in between. But what was the application actually doing? This is where marker events from your own application come in handy, so you know which high level operation the user triggered and how long it took. This can be achieved with a custom ETW event provider or the special ETW marker events which WPA can display in the Marks graph if any of them are present in your ETL file. To be able to use them to navigate in the ETL file, your application must write them at interesting high level time points, which indicate for example the start and stop of a user initiated action. For .NET applications the EventSource class is perfectly suited for this task. Marker events can be written with a custom PInvoke call to

Here is the code to write a marker event in C# which shows up in all kernel sessions. The TraceSession.GetKernelSessionHandles is adapted from the TraceEvent library. If you have "NT Kernel Logger" sessions only (e.g. if you use xperf) then you can use 0 as session handle to write to it.

Now that we have our marks we can use them to navigate to key time points in our trace session:

We see that the first CPU spike comes from RegexProcessing, which took 3,1s. The second regex block was active between the Hang_Start/Stop events, which took 2,7s. This looks like we have some real problems in our PerformanceIssueGenerator code. Since we have, according to our ETW marks, Regex processing, many small objects, many large objects and a hang with simultaneous regex processing, we need to select one problem after the other so we can look at each issue in isolation. That is the power which custom ETW providers or ETW Marks can give you. Normally you are lost if you know that you have several issues to follow up. But with application specific context marker events you can navigate to the first regex processing issue. To do that, select the first event by clicking on it, then hold down the Ctrl key while clicking the stop event to multi-select events. Then you can right click on the graph to zoom into the region defined by the first and last event.

Analyze CPU Issue

When you now look at the CPU consumption of the busiest threads we find 2.7s of CPU time. At the bottom WPA displays the selected duration of 3.158s, which matches the reported timing of 3.178s quite well. But the reported thread time of 2.7s does not quite match the observed duration. In the graph you see some drops in the CPU graph which indicate that for some short time the thread was not active, possibly waiting for something else.

Wait Chain Analysis

That calls for a wait chain analysis. If you scroll down you will find a second CPU graph with the name CPU Usage (Precise) Waits. This customized graph is perfectly suited to find not only how much CPU was consumed but also how long each thread was waiting for something. Please note that this graph does not replace the CPU Usage (Sampled) graph; I have explained the difference between both CPU graphs earlier. The column selection context menu of this graph has been massively thinned out to keep only the most relevant columns. Otherwise you would have to choose from over 50 items, and the context menu would even have scroll bars! Now we have only process, thread id and thread stack as groupings. Next comes a list of fixed columns which are always visible because they are so important. There we see how long each thread waited for something (WaitForSingleObject …) as total and maximum wait time, and the CPU usage in milliseconds. If we add up, for the most expensive thread, the wait time of 0.385s and the 2.726s of CPU time, we get 3.111s, which is within a small error margin exactly the time we measured between the start and stop of our regex processing operation.

Is this conclusion correct? Not so fast. Since a thread can only be running or waiting (it can also sit in the ready queue, which is only relevant if you have more runnable threads than cores), the sum of CPU and wait times for each thread will always add up to 3.1s, because that is the time range you zoomed into. To actually prove that this time was really spent waiting while we were regex processing, we have to sort by the wait column and then drill down into thread 10764.

When we do this we see that all waiting occurred while the DoRegexProcessing delegate was executing. It was waiting on the regular expression JIT compilation to complete. Now we have proven that the wait time is really spent while executing the regex processing code. If we would like to optimize the total running time we have two options: either we use more threads to parallelize even further, or we tune our regular expression or replace it with something else. Before going down that route you should always check whether this string processing is necessary at all. Perhaps strings are not the best data structure and you should think about your data structures. If you do need to use strings, you still should verify that the regex processing was really necessary at this point in time. Perhaps you do not need the results of this regex processing right now.
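One cheap mitigation, sketched here under the assumption that the same pattern is applied repeatedly: pay the regex compilation cost only once by caching a compiled Regex instance instead of constructing one per call. The pattern and the helper name are mine, for illustration only:

```csharp
using System.Text.RegularExpressions;

static class RegexCache
{
    // Compiled once per process instead of once per call.
    // RegexOptions.Compiled trades a one-time compilation cost
    // for faster matching on every subsequent use.
    static readonly Regex Numbers = new Regex(@"\d+", RegexOptions.Compiled);

    public static int CountNumbers(string text) => Numbers.Matches(text).Count;
}
```

This does not make a fundamentally wrong data structure fast, but it removes repeated compile time from the hot path.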

A Garbage Collection Issue?

In the list of marker events we see that the first regex issue overlaps with a GenerateManySmallObjects operation. Let's zoom into that one and check what we see under CPU usage. There we see that we are consuming a significant amount of CPU in the Other stack tag, which categorizes unnamed stacks into their own node. If we drill into it we find the allocating method PerformanceIssueGenerator.exe!PerformanceIssueGenerator.IssueGenerator::GenerateManySmallObjects. That is interesting. Was it called on more than one thread?

To answer that question it is best to select the method in question and open View Callers - By Function from the context menu.

This lets you drill up to all callers of that method, which is useful to find the total cost of a commonly used method (e.g. garbage_collect …). This is still the total sum for all threads. Now we need to bring back our Thread ID column to see on which threads this method was called.

If the Thread ID column had more than one node beneath it, it would be expandable as shown above. This proves that only one thread was calling this method, and it used 1.3s of CPU time. But there are still 0.4s of wait time missing. Where was the rest spent? We could use the Waits graph again to see where the wait operation was hanging around, but since we know that it was allocating things we can open the region for garbage collections.

And we have just found that during that time the garbage collector blocked our application for 0.4s, which explains the missing time in our CPU graph. We also know that we had 75 GCs during that time, which is a rather high number. We can even see how many GCs of which generation we had during that time by using my (also hand-rolled) Generic Events GC Generations graph:
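Outside of WPA you can get a rough version of the same per-generation numbers directly from the runtime. A minimal sketch, where GenerateManySmallObjects stands in for the allocating workload from the text:

```csharp
using System;

static class GcCounter
{
    // Hypothetical stand-in for the allocating workload from the text.
    static void GenerateManySmallObjects()
    {
        for (int i = 0; i < 1_000_000; i++) { _ = new byte[32]; }
    }

    // Returns how many collections of each generation happened while
    // the workload was running, by diffing GC.CollectionCount.
    public static (int Gen0, int Gen1, int Gen2) Run()
    {
        int g0 = GC.CollectionCount(0), g1 = GC.CollectionCount(1), g2 = GC.CollectionCount(2);
        GenerateManySmallObjects();
        return (GC.CollectionCount(0) - g0, GC.CollectionCount(1) - g1, GC.CollectionCount(2) - g2);
    }
}
```

This is of course only a coarse in-process check; the ETW view additionally tells you when the GCs happened and how long your threads were suspended.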

That's all about the small object allocation issue. Now let's go back via "Undo Zoom" in the graph to get our previous time range back, where we can do the same analysis for the Generate Many Large Objects issue. That one is not terribly interesting, so I leave it to you as an exercise.

UI Hang (2.727s) Analysis

Now let's investigate why our UI hung. The hang operation has nice markers which we can use to zoom into the region of interest.

We have a lot of CPU intensive threads here. If the UI was hung, it must either have been using much CPU on the UI thread, or the UI thread was blocked for some reason. If you know that the UI thread was started in your Main method you can search for CorExeMain, which is the CLR method that calls into your Main method. Or you search for a typical window method like user32.dll!DispatchMessageWorker. With that we quickly find thread 11972 as the thread which was hung in:

Obviously it was waiting for a task to complete. If you call Task.Wait on your UI thread you will block further UI message processing and your UI will not be able to redraw anymore. At least that was the visible observation. The hang took 2.727s, which matches the total of all wait operations (2.724s) almost exactly. If that were one single wait we should see the same time as the maximum wait time, but there we have only 1.383s. When using WPA you can look deep into the inner workings of Windows and .NET. Let's be a little curious and check out why there was not one wait operation but 30 waits while we were blocked in Task.Wait. We observe that Task.Wait calls into the current Dispatcher to delegate the wait to its SynchronizationContext. This in turn calls into the current CLR thread to DoAppropriateWait, which in turn calls MsgWaitForMultipleObjectsEx on the UI thread. This method can block a UI thread but still let e.g. mouse events or COM messages through. Depending on which parameters you call it with, it can even pump your UI thread with messages, which is sometimes necessary. This can lead to the unintended side effect that you execute code while you are waiting for a blocking operation. I have found such issues in Can You Execute Code While Waiting For A Lock? and Unexpected Window Message Pumping Part 2.
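The anti-pattern, reduced to a sketch. DoRegexProcessing and the click handlers below are hypothetical placeholders, not the blog's actual code:

```csharp
using System;
using System.Threading.Tasks;

class UiSketch
{
    // Placeholder for the expensive CPU-bound work from the text.
    static void DoRegexProcessing() { /* expensive work */ }

    // Blocks the calling thread. On a UI thread this stalls the message
    // loop: no repaints and no input until the task completes.
    public void OnClickBlocking()
    {
        Task t = Task.Run(() => DoRegexProcessing());
        t.Wait();
    }

    // Keeps the message loop pumping; the continuation resumes after
    // the background work has finished instead of blocking for it.
    public async Task OnClickAsync()
    {
        await Task.Run(() => DoRegexProcessing());
    }
}
```

With await, the UI thread returns to its message loop immediately, so the 2.7s of work would no longer show up as a hang.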

Digging Deeper

We know that MsgWaitForMultipleObjects can pump messages with the right flags. So what was happening in these 30 wait operations? If you open the column context menu you can add the Ready Thread Stack column, which gives you all call stacks which caused MsgWaitForMultipleObjects or its child methods to wake up.

Poor other tools. They will never be able to tell you that MsgWaitForMultipleObjects was woken up 28 times by dwm.exe, the Desktop Window Manager, to tell our wait method: here you have 28 mouse events which you can process or not, depending on how your wait flags were configured. The event which finally ended our Task.Wait operation came from the last thread that finished processing. This is the call where our wait operation woke up our blocked message loop.

You may ask: how do I know that? Well, the Switch-In Time column of our event in the table, which is to the right of the yellow bar with 14.425, tells me this. This is where our wait operation ended and our thread was woken up again. This is also the time where the CPU consumption has a sharp drop, which is no coincidence. If you have sharp eyes you will notice a blue line in the graph on the right. This is when our thread was woken up. Every thread wake-up is displayed in the graph, which makes it much easier to spot regular patterns of activity. It is this wealth of information that makes WPA so powerful. But the default profile is mainly aimed at kernel developers, which is a pity, because it is an excellent system-wide profiling tool which gives you deep insights into how Windows really works and sometimes how Windows or your application breaks.

If you have counted you will still miss one ready event. This one was Visual Studio, which sent us a window message from vslog.dll!VSResponsiveness::Detours::DetourPeekMessageW. I am not sure why Visual Studio does this, but if you profile your system more often you will notice that the Visual Studio responsiveness machinery gets into a loop from time to time and all Visual Studio instances send each other messages (still the case with VS2015). This does not show up as a surge of CPU consumption, but it will increase your ETL file size considerably because you will get millions of context switches. This is not a big problem, except for battery powered devices, which consume more power than they need to, and for people profiling their machines, who get huge files.

.NET Exception Analysis

The previous analysis was a bit complex. Now for something nice and simple. You can easily check whether your .NET application throws exceptions if you enable the .NET provider. With WPRUI you need to check ".NET Activity" to get these events and many others. For a short list of interesting .NET events you can check out the ETW manifest of .NET yourself, or you read Which Keywords Has An ETW Provider Enabled?, which covers the most interesting .NET events as well. Zoom into the Do Failing Things region, which is visible as an ETW marker event in the Marks graph. Then you need to add a Generic Events graph and change the graph type from "Activity by Provider, Task, Opcode" to .NET Exceptions, which is only part of my WPA profile.

This gives you a nice view of all thrown exceptions in all .NET applications. You can play with the groupings and group e.g. by exception type and message to see how many different ones you have. To do that you only need to drag the "Exception Message (Field 2)" column and drop it to the right of ExceptionType. Now you have Process - Exception Type - Exception Message in a nice cohesive view. You just have to remember that all columns to the left of the yellow bar are the columns which are grouped together by their values.

Starting with .NET 4.6 you can also use the .NET Exceptions Caught view which shows you all thrown exceptions and in which method these exceptions were caught.

This can be interesting for automated regression tests if something breaks because an exception which was always happening is now caught in a different method, which might be too late in the call chain. Having an exception is nice, but the most interesting part of it is its call stack. From this graph we know the thread on which it was happening. We do not need the process id because each thread id is unique across all processes. Then you can zoom to a specific exception and investigate in CPU Usage (Precise or Sampled) the call stacks, in case something related to exception handling shows up. Searching for the method KernelBase.dll!RaiseException is usually a good way to find it. If that did not help you need to enable stack walking for the .NET provider. I blogged about how to do that for a custom provider with xperf a long time ago in Semantic Tracing for .NET 4.0. If you recorded the provider "Microsoft-Windows-DotNETRuntime" with stack walking enabled, you can add in the View Editor from the column context menu

the Stack column to display the call stack of all thrown exceptions. An alternative is to enable the StackKeyword+ExceptionKeyword of the "Microsoft-Windows-DotNETRuntime" ETW provider. This will cause .NET to write an additional managed stackwalk event for every .NET event. This event is not human readable, but PerfView can decode it. This is one of the things where PerfView is still the best tool to investigate managed application issues.

Conclusions

Although this is already a rather long post, you have still only seen a small fraction of what you can do with ETW profiling and the WPA tool. I hope that I have convinced a few more people out there to try it out, because it can help you find problems you never knew existed. But these problems might be the ones your customers are always complaining about.

Did you ever see a nice tool with a fancy UI and think: hey, that thing is powerful, I will try it! But later you were left scratching your head, wondering why others can use this tool and get great results but you just don't get it?

Then I have news for you: I have created a WPA profile which aims at user mode and not kernel mode developers. Many columns were removed from the context menus to give you all the power you need to find issues in your application where no one else has found the root cause.

The ETW profile can be downloaded from here as a zip file. Unpack it to a directory and you are ready to go. So what is inside it?

File | Description
Simple.wpaProfile | The main WPA profile you can use now.
JIT.xml | WPA region file referenced by Simple.wpaProfile to get JIT times of your application like PerfView.
GC.xml | WPA region file referenced by Simple.wpaProfile to get Garbage Collection metrics like in PerfView, only better!
HookRegions.xml | WPA region file referenced by Simple.wpaProfile to see mouse clicks in your ETL file when ETWControler is running and capturing your keyboard and mouse events.
default.stacktags | Stacktag file which serves as a base for your application to find common .NET issues like event handle leaks, too much regex usage, monitor contention and much more.

Besides this, there are also some WPR profiles included to enable recording of specific .NET events like exceptions together with GC events into a larger 1 GB buffer.

WPR of Windows 10 has under Scenario Analysis a .NET Activity profile which has a very small 160MB ring buffer that is way too small for my needs. It might be good enough for you, but I have added extra WPR profiles for GC and JIT as well.

To show you the difference I have created a small PerformanceIssueGenerator.exe application. This generates various issues which you can analyze with WPA.

I have recorded the data already and put the 7z file here. If you want to extract it you need to download 7-Zip from the official site. To view the ETL file you need a Windows 8 or later machine and the Windows Performance Toolkit from the Windows 10 SDK.

When you have downloaded the profile and the sample etl file you can apply the profile under Profiles - Apply…

Then you get two tabs. The first one contains .NET specific analysis stuff like GC and JIT. The second tab can be used for the usual bottleneck analysis regarding CPU, disk and memory usage as well as wait chain analysis. You do not need to load the profile every time. You can save it as your default profile by clicking on Save Startup Profile to open all of your ETL files in this view from now on.

Normally I use PerfView for GC issues to check out the GC Stats of an application to see if anything significant is going on. If you have ever used PerfView then you will have noticed that it is a great tool combined with a crappy UI. At least for some key scenarios we can now use WPA with hand crafted region files instead of PerfView.

GC Views

You now get a GC view like this

You can visualize each GC generation type and the time it took to execute. The Count column also tells you how many GCs you had. This can help a lot if you want to get GC metrics only for a specific time region in your application. Now it is easy to see how much time of your use case was available to your application and how much time your threads had to wait for the garbage collector. This was my main issue with PerfView and its GCStats view: it is calculated for the complete duration of the profiling run. Most of the time I want GC metrics only for specific parts of my application, because I am mostly optimizing at a single place at a time.

Here is how you get the GC statistics in PerfView:

The numbers in GC Rollup By Generation match the region file pretty well. The GC pause times also correlate quite well with the distribution of generations; the timings are not 100% the same, but the ratios are a good fit.

Since PerfView and ETW use the same data, you can rightfully ask why there is a difference at all. The answer is that WPA does not sum up all GC regions by their duration. A WPA region is defined by a start and a stop event, which is then displayed in the UI as shown above. But if regions happen in parallel on several threads, WPA will use as sum only the time where at least one region was active. This diagram illustrates how WPA region summation works:
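My reading of that summation rule, as a sketch (not WPA's actual implementation): the reported time is the length of the union of all region intervals, not the sum of their individual durations.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class RegionSum
{
    // Naive sum: adds every region's duration, counting overlaps twice.
    public static double DurationSum(IEnumerable<(double Start, double End)> regions)
        => regions.Sum(r => r.End - r.Start);

    // WPA-style sum as described in the text: only time where at least
    // one region is active is counted, i.e. the length of the union.
    public static double UnionSum(IEnumerable<(double Start, double End)> regions)
    {
        double total = 0, curStart = double.NaN, curEnd = 0;
        foreach (var r in regions.OrderBy(r => r.Start))
        {
            if (double.IsNaN(curStart)) { curStart = r.Start; curEnd = r.End; }
            else if (r.Start <= curEnd) curEnd = Math.Max(curEnd, r.End);      // overlap: extend
            else { total += curEnd - curStart; curStart = r.Start; curEnd = r.End; } // gap: flush
        }
        if (!double.IsNaN(curStart)) total += curEnd - curStart;
        return total;
    }
}
```

For the regions (0,2), (1,3) and (5,6) the plain duration sum is 5, while the union is 4, because the overlap between the first two regions is counted only once.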

This detail is not so relevant for GC, but it will be very important when we look at JIT statistics. Since a foreground GC blocks your complete application, there is little concurrency going on. We can also visualize when GCs of which type are happening. The view Generic Events GC Generations will show you how many GCs of which type were happening. Initially it is collapsed, so you have to open it. This view takes a little while to build up, since it needs to parse all ETW events for GC Start events, which contain as data point the generation number that is visualized here.

This is the greatest thing about WPA on Windows 10. Now you can visualize the data of your custom traces. With a little WPA graph configuration you can create key graphs for your own application which show e.g. frames per second, # of db transactions, … Who needs performance counters when you can visualize your trace data much better now?

Marker Events

A key point to know where to start looking for an issue are Marks. In WPRUI you can press Ctrl+Win+x to open a window which allows you to write a marker event to your trace file while you are recording data. Unfortunately this functionality is not exposed to the outside world to set marks programmatically. If you revert to xperf you can set marks programmatically with xperf -m if you wish. But since marks are so useful to navigate in an ETL file, I really wonder why the method which xperf calls is not documented at all. If you want to know how real power users are using WPA, then you need to call wpa /?

I guess at Microsoft they let their tests run with profiling enabled while saving screenshots as ETW events. If something happens, the ETL file is downloaded from a central repository and opened with a test specific profile. The file is opened and zoomed into the relevant test part, which is identified by markers or regions from a region file. At least some people at Microsoft use this tool so often that it makes sense to automate it even further, since the controls of WPA are UI automatable, so you can script nearly your complete analysis. Now THAT is what I call good performance regression testing.

A little reverse engineering of which APIs xperf -m calls finally turned up the method EtwSetMark, which is exported by ntdll.dll. I am sure the readers of my blog can figure out the rest as well.

But I really wish it would become an officially supported API, since it is so useful. Sure, you can define your own marker events as well, but since the support in WPA is already built in it would really help. It would also be nice if xperf would emit the mark event not only to the "NT Kernel Logger" ETW session but to all active kernel sessions, so you could also mark the WPA kernel session, which is currently not possible.

JIT View

PerfView can also give you the timings of how much time each method took to compile. This is useful if you are compiling too much code on the fly for your WCF proxies or your serializers. Sure, it is done only once, but if this time plays a significant part in your application startup you should rethink how your application works. Since .NET 4.5 will NGen all assemblies on the fly if they are used often enough, you do not need to consider using NGen explicitly for your application. But if you have much dynamic code generation going on you can still suffer from long JIT times.

You can get a similar view by switching from Garbage Collection to JIT Time per Thread

As I have shown you in the GC section, the summation of regions is not a simple sum of the durations of all JIT events. The JIT compiler can compile code in many different threads concurrently. The Duration sum of JIT time over all threads reported by WPA is therefore largely irrelevant if you want to compare two different runs of your application. Instead you need to look at the JIT times of each thread. You can copy the duration column with the context menu "Copy Column Selection" into Excel, which gives us the exact same JIT time as PerfView. Now I do not need PerfView for JIT analysis anymore. I am moving more and more away from programming towards configuring WPA to give me just the view I need.

If you expand JITs you get at least the namespace of each JIT event. This is the best I could come up with, since WPA does not support concatenating strings of different fields into a region name. But you can still open the also customized view _Activity by Provider Task Process to view the "raw" data if you need more information. If you drop e.g. Field 5 to the left of the yellow bar, you get a nice summary of how many methods in this namespace were compiled.

As you can guess, there are lots more goodies inside the now published profiles to make WPA really usable for non-kernel developers. I have streamlined nearly every context menu, removing all useless or kernel-only columns. You have a much easier time now concentrating on the important things. With this profile WPA is nearly a new application and a central analysis station for managed and unmanaged performance analysis.

In the next posts I will walk you through the problems of PerformanceIssueGenerator and how you can visualize them in WPA efficiently.

It has been a long wait to finally get the new version of the Windows Performance Toolkit. I did not have much time to test the betas since, until now, managed call stack resolution was not working.

WPT for Windows 10 makes some nice progress; you can download it here. Click on Download Standalone SDK and run it. Then uncheck everything except Windows Performance Toolkit and it will install on your machine within minutes. If you download the complete Windows 10 SDK you are downloading not 170MB but several GB.

Management Summary

Graphs are more readable.

Graphs with short spikes are much better rendered without the need to zoom into to see them.

Colors for graphs are changeable (finally).

Filtering can be undone now in the context menu.

Quick Search in all tables.

Symbol load dialog was improved.

You can configure your own stack tag files in Trace Properties.

No new Graph Types (at least with the default providers I enable).

5 Added columns to CPU Usage Sampled

Two could be extremely useful: Source File Name and Source Line Number

12 Added columns to CPU Usage Precise.

Two are useful to user mode developers: ProcessOutOfMemory and CPU Usage (in view).

Occasional crashes are also featured now.

CPU Usage Sampled

The biggest improvement is certainly the possibility to see source file and line numbers. You can find out this way where most CPU is spent on which line if the CPU consumption happens in your code.

A quick recap for the newbies: the CPU Usage (Sampled) graph is generated by taking the stack trace of all running threads 1000 times per second (this is the default). These call stacks are then added together. A count of 1000 in your Main method means that the method has used one second of CPU time.
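That relationship is just the sample count divided by the sample rate; as a tiny sketch (the helper names are mine):

```csharp
static class Sampling
{
    // CPU time attributed to a stack is approximately count / sample rate.
    // At the default rate of 1000 samples/s, a count of 1000 is one CPU second.
    public static double CpuSeconds(long sampleCount, int samplesPerSecond = 1000)
        => (double)sampleCount / samplesPerSecond;
}
```

Keep in mind that this is a statistical estimate: very short-lived methods may be over- or under-represented between samples.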

If the call stack does not end in e.g. your Main method when the stack trace is taken, then your source file will not be displayed there. As usual you have to take a sharp look at the numbers. Your method may be causing high CPU consumption but might never show up with a source and line number, because the stack always ends in an external library call (printf in my case) for which I did not have the pdb loaded.

It would be nice to have file and line numbers for managed code as well, but this feature is of limited use as it is now. Ideally I want to see this information while I am drilling all the way down a call stack, and not only if the call stack ends in a method which I compiled.

For reference, here is the complete list of columns you can choose from. Green are unchanged columns, bold are new ones. I have never found out how to give a thread a name in WPT. If anyone knows more about the mystical thread names and how they can be set, I would be very interested.

WPT 8.1 | WPT 10
% Weight | % Weight
Address | Address
All Count | All Count
Annotation | Annotation
Compiler Optimization | Compiler Optimization
Count | Count
CPU | CPU
Display Name | Display Name
DPC/ISR | DPC/ISR
Function | Function
 | Image RVA (new)
Inlined Functions | Inlined Functions
Is PGO'ed | Is PGO'ed
Module | Module
PGO Counts | PGO Counts
PGO Dynamic Instruction Count | PGO Dynamic Instruction Count
Priority | Priority
Process | Process
Process Name | Process Name
Rank | Rank
 | Section Name (new)
 | Section RVA (new)
 | Source File Name (new)
 | Source Line Number (new)
Stack | Stack
Table | Table
Thread Activity Tag | Thread Activity Tag
Thread ID | Thread ID
Thread Name | Thread Name
Thread Start Function | Thread Start Function
Thread Start Module | Thread Start Module
TimeStamp | TimeStamp
Trace # | Trace #
Weight | Weight
Weight (in view) | Weight (in view)

CPU Usage Precise

This is by far the most complex table. With WPT 10 we have 66 columns to choose from to find out how our threads interact with each other.

I am not sure how WPA detects that a process was out of memory, but I think the kernel knows quite well if something went wrong. The ProcessOutOfMemory column could be handy to check stress tests where the machine was under heavy load, which could explain subsequent application failures.

CPU Usage (in view) seems to be an attempt to display a truer thread running time. I am not sure how this value is calculated, but it seems that if all cores are in use and you get significant Ready times then it differs quite a lot. Otherwise the differences are marginal.

Quite a few columns have been renamed, which might render your custom filter and enablement conditions useless, and WPA will show you an error.

If your old preset enabled by default all processes except the Idle process, which clutters up the view, it was set to:

([Series Name]:="NewProcess" AND NOT ([NewProcess]:="Idle (0)"))

With WPT 10 you need to change it to:

([Series Name]:="New Process" AND NOT ([New Process]:="Idle (0)"))

This happens to all users of WPT 8.1 who have saved a custom default profile containing this enablement rule.

With an empty enablement rule your view will look like

and with the rule

you will see no difference when you apply it. Why? Because this rule only enables matching processes, but it does not disable the non-matching ones! I am with you that this is very counterintuitive, but it is the way it is. If you save this as your new default profile and then open a new file, you will no longer get a view where the Idle process is enabled by default.

For reference I list here all available columns with the old and new names, so you have an easier time updating your custom filter, enablement and expansion rules.

Green lines are unchanged, yellow are renamed columns and bold are new columns in WPT 10.

CNET | At the company’s Build developer conference in San Francisco, Microsoft’s Joe Belfiore demos a new addition to the Windows Phone 8 operating system, a personal assistant called Cortana. note | The character design for the Microsoft Windows Phone smart digital assistant Cortana is based on Microsoft Studios’ popular Halo game series, in which [...]

If at first you don’t succeed, try … turning your condo project into a rental? That’s the approach Cohen Goldstein Investment Strategies is taking with its condo development at 318 East 81st Street. The project is now being marketed as a “boutique collection of six full-floor rental residences” — and some previous sales listings have been taken down. In a condo plan filing last year, the developer was aiming for a $25 million sellout. Four […]

New York real estate developers gave the Democratic State Campaign Committee more than $270,000 in campaign contributions during just 10 days last month, hedging their bets as Democrats seek to take back control of the state Senate after Tuesday’s general election. City real estate concerns, which typically favor Republicans, accounted for more than 40 percent of $674,000 sent to the DSCC during the time period, Politico reported. For some developers, the choice to give to […]

Sheldon Solow finally ended his protracted battle with the Metropolitan Antiques store on West 57th Street, paving the way for the developer to build his mixed-use Billionaires’ Row tower. Metropolitan Antiques, which pleaded guilty to illegally selling $4.5 million in elephant ivory, suddenly vanished from its location at 10 West 57th Street, the New York Post reported. The store had been the last holdout standing in the way of Solow’s plans to build a 52-story […]

Gary Barnett’s secretive Diamond District development, it turns out, will be a hotel with several hundred rooms. And the developer is now ready to move ahead with those plans after securing the last properties he needs in order to build. Extell Development closed on the four-story former home of the Plaza Arcade diamond mini-mall at 32 West 48th Street for $40 million, the New York Post reported. The developer also purchased several nearby buildings and […]

It's been a while since Final Fantasy VII's remake was announced at Sony's E3 conference and the game has only been seen once in video form since then, with nary a peep in the last few years. The game has changed developers and a lot of the work has reportedly been completely restarted, which gives director Tetsuya Nomura an opportunity to rethink Final Fantasy VII's conventions. One way of doing that, Nomura says, is to consider which of the Final Fantasy VII compilation titles might be better wrapped into the main story.

At a closed event for the Switch port of The World Ends With You, the attending Tetsuya Nomura was asked about one of the Compilation games. Nomura's response, translated by Gematsu, seems to wistfully wonder aloud how those games should fit into the FFVII remake.

“Not everyone may know this, but I’m remaking Final Fantasy VII," Nomura said with a laugh. "Right now I’m concentrating on Kingdom Hearts III, but when that’s finished, VII will be where I head next. I’m thinking about ideas regarding the remake’s release — I even spoke to producer Kitase about it today. All of us old-timers are considering various developments in regards to what accompanies the remake. Like if we can manage to do something about the Compilation [of Final Fantasy VII] titles too. But for the time being, please wait for VII‘s turn to come.”

The FFVII Compilation was a series of side games in usually different genres that Square Enix started releasing through the 2000s. Those games included pre-smartphone mobile game Before Crisis, which told the story of the Turks; Dirge of Cerberus, a post-FFVII action shooter starring Vincent Valentine; and Crisis Core, a prequel focusing on SOLDIER member Zack. There were also two movies, Advent Children and Last Order, that count as part of the Compilation.

While Nomura has a penchant for strange names, the FFVII Compilation titles mostly follow the naming scheme of AC, BC, CC, and DC, with Last Order then screwing everything up.

We are looking for a talented System Administrator/Web Designer to relocate to the beautiful Yukon! We are looking for a new addition to be part of the network... $100,000 - $120,000 a year. From Indeed - Wed, 10 Oct 2018 23:18:50 GMT - View all Whitehorse, YT jobs

If you wanted a job as a designer or any other creative professional several years ago, you would need to print out your work and build a physical portfolio. A lot of the decision on whether you got the job would usually come down to that printed portfolio. Today it's still important to have a portfolio, but you won't want to print it. For web developers, many potential clients or employers will make their decision based on the presentation of your online portfolio.

.NET Reflector is a class browser, decompiler and static analyzer for software created with the .NET Framework, originally written by Lutz Roeder. MSDN Magazine named it one of the Ten Must-Have utilities for developers, and Scott Hanselman listed it as part of his "Big Ten Life and Work-Changing Utilities".

.NET Reflector was the first CLI assembly browser. It can be used to inspect, navigate, search, analyze, and browse the contents of a CLI component such as an assembly, translating the binary information into a human-readable form. By default, Reflector allows decompilation of CLI assemblies into C#, Visual Basic .NET, C++/CLI, Common Intermediate Language and F# (alpha version). Reflector also includes a "Call Tree" that can be used to drill down into intermediate language methods to see what other methods they call, and it will show the metadata, resources and XML documentation.

.NET Reflector can be used by .NET developers to understand the inner workings of code libraries, to show the differences between two versions of the same assembly, and to see how the various parts of a CLI application interact with each other. There are a large number of add-ins for Reflector.

.NET Reflector can be used to track down performance problems and bugs, browse classes, and maintain or become familiar with code bases. It can also be used to find assembly dependencies, and even Windows DLL dependencies, via the Analyzer option. There is a call tree and an inheritance browser. It will pick up the documentation and comments stored in XML files alongside their associated assemblies — the same files used to drive IntelliSense inside Visual Studio. It is even possible to cross-navigate related documentation (xmldoc), searching for specific types, members and references. It can also be used to effectively convert source code between C# and Visual Basic.

.NET Reflector has been designed to host add-ins that extend its functionality, many of which are open source. Some of these add-ins provide other languages that can be disassembled too, such as PowerShell, Delphi and MC++. Others analyze assemblies in different ways, providing quality metrics, sequence diagrams, class diagrams, dependency structure matrices or dependency graphs. It is possible to use add-ins to search text, save disassembled code to disk, export an assembly to XMI/UML, compare different versions, or search code. Other add-ins allow debugging processes, and some are designed to facilitate testing by creating stubs and wrappers.

This is a long-term, ongoing project with no deadline, no budget limits, and immediate effect! My company has around 400 student records that need to be enrolled in different universities... (Budget: $250 - $750 USD, Jobs: Database Programming, Java, MySQL, Software Architecture, Web Scraping)

I'm a small startup land surveyor and I need a web-based tracking/queue system to upload docs and organize workflow through a chain of employees. Clients should also be able to log in and view status changes on their orders... (Budget: $1500 - $3000 USD, Jobs: Software Development)

An entity tied to Jorge Pérez’s Related Group and Rockpoint Group just bought the 73-acre development site for Water Tower Commons, a master-planned rental community in Lantana initially proposed nearly three years ago. The company, Lantana I Owner LLC, paid $14.76 million for the site at the northeast corner of West Lantana Road and Andrew Redding Road. The developers broke ground on the first phase of the project with the backing of a $52.2 million ... [more]

Developer Dan Deitchman just bought an oceanfront lot in Fort Lauderdale Beach as part of a $6.9 million off-market deal. The trade for the 20,480-square-foot residential property at 2300 North Atlantic Boulevard breaks down to about $340 per square foot. Records show attorney Shane Kelley sold the lot on behalf of the estate of the late Irwin Berliner. The property last sold for $1.18 million in 1989. Records show the site previously held a four-bedroom, ... [more]

It looks like "PlayerUnknown's Battlegrounds," the game whose success paved the way for "Fortnite: Battle Royale," is coming to the PlayStation 4.

The game hasn't officially been announced, but files for the game are present in the PlayStation 4 game database, and the online PSN store.

"PUBG" is one of the most popular action games on PC, but it's been console exclusive to the Xbox One for the past year.

It looks like "PlayerUnknown's Battlegrounds," the game that is largely credited with sparking the popularity of battle royale shooting games like "Fortnite: Battle Royale," is set for release on PlayStation 4.

While Bluehole, the game's developer, has yet to confirm a PS4 release date, fans have discovered files on PlayStation 4 consoles and in Sony's online PlayStation Network store. Last month the South Korean Game Rating and Administration Committee leaked ratings for a PlayStation 4 version of the game as well. A representative for the game declined to comment.

"PlayerUnknown's Battlegrounds," or "PUBG," was officially released on PC in May 2017 and has been console exclusive to the Xbox One since December 2017. The game was in Microsoft's Xbox Game Preview program until September 4th, when version 1.0 was officially released. The mobile version of the game is also one of the most popular video games in China.

Like "Fortnite: Battle Royale" and other games that it inspired, "PlayerUnknown's Battlegrounds" throws 100 players onto a single map with scattered resources. Players need to find weapons and items to defend themselves as the safe areas of the map begin to shrink. The last player or team surviving at the end of the round is the winner.

Though "PUBG" helped pioneer the battle royale genre, the game has seen its star wane, even as rivals like "Fortnite" have skyrocketed to success and challengers like "Call of Duty's" Blackout and "Battlefield V's" Firestorm continue to crop up. "PlayerUnknown's Battlegrounds" has a smaller development team than those games and has struggled to keep up with the demands of a massive community.

In November of 2017, "PUBG" was averaging 1.3 million players each day, according to SteamCharts, which tracks players on Steam, the most popular platform for PC games. The average number of daily "PUBG" players has since dwindled to about 450,000 over the last 30 days.

Still, "PlayerUnknown's Battlegrounds" remains one of the three most popular games on Steam by a wide margin, alongside "Dota 2" and "Counter-Strike: Global Offensive."

With "PUBG" available on multiple platforms, players are wondering if Bluehole will be able to implement cross-platform play. Earlier this year "Fortnite" became the first game to offer cross-platform play between the Xbox One, PlayStation 4, Nintendo Switch, PC, and mobile devices. Bluehole has expressed interest in allowing cross-platform play in the past, but nothing has been confirmed.

"PlayerUnknown's Battlegrounds" is currently available on PC and Xbox One for $29.99. This hypothetical PS4 version will likely carry a similar price.

As part of finishing my degree (business with a concentration in management information systems), I'll be taking a programming class. My options are Java or C++ - what's the better way to go?

I'm a mid-career tech project manager/analyst who's finally decided to finish my bachelor's degree. It's a business degree with an MIS concentration, and I'll be taking a programming class as part of that. I have enough tech in my background (primarily web: PHP, JavaScript, etc.; also lots of SQL) that I'm not starting from scratch, but I'm also not looking to learn to code as a career. I've found that the little bit of knowledge I do have is super-helpful in working with developers: I can generally follow along in the code that they're gesturing at on the screen, and it definitely helps when talking through requirements, roadblocks, etc.

Again, I'm not looking to learn to build anything so much as I'm looking to use this required course to boost my understanding of the people and projects I work with.

My "intro to programming" class options are C++ and Java. The articles I'm seeing are typically geared toward CS majors, but I'm not sure how much of that applies to me since this won't necessarily be a building block for other languages.

tl;dr: given the choice between C++ and Java for a business major with no intent to move on to deeper levels of programming, which is the better option?

I'm looking for books, papers, blog posts or anything else that deals with measurement in an environment where the data sources are not simply biased, but actively try to deceive the observer.
Is there research into this? If so, what are the key terms to look for? Details, examples below the fold.

I work as a BI developer in a big multinational software company. This is not the first time I've worked in BI, and I am again struck by the same fundamental issues. Basically, I see Goodhart's Law in action all day long, coupled with management committing McNamara's fallacy wherever possible. This got me thinking: is there any serious statistical, management, organizational or game-theoretic research on environments where the "data sources" are intelligent agents who may or may not fudge their numbers or try to influence the data collected according to their own incentives?

Examples I have in mind:

Sales reps hiding data from each other (in a CRM system) for fear of losing a deal to another salesperson

Managers inflating reported work and headcount requirements

Projects fudging RoI or other metrics

Emissions from German cars are also a good example :)

These are just off the top of my head.

Is there any research on whether it's possible to draw inferences in such an environment? Any pointers are appreciated.

Or if the above problem can be rephrased in other terms, or shown to be analogous to some other field, that could also be useful. For example, if these business situations can be framed as a counterintelligence task (i.e., trying to uncover deception by the opposing side), then perhaps techniques and tools from that field could be useful in the business context too. Or is that too far-fetched?

The lead developer will be working on eTime and Total Absence Mgmt Time products - developing UI and backend services and integrating eTime and TAM with other... From Automatic Data Processing - Sat, 08 Sep 2018 17:33:04 GMT - View all Alpharetta, GA jobs

For decades, the United States has been in the midst of a housing crisis. Every day, individuals from each and every state struggle to find clean, safe homes and apartments that cost less than 30 percent of their total gross annual income.

The narrative is typically framed in terms of big urban areas, where housing and living costs are rising rapidly and gentrification pushes long-time residents out of homes and neighborhoods they've lived in for decades. The same is true for suburban areas, which have experienced tremendous growth in poverty. What few realize, however, is that the affordable housing crisis also affects rural America.

A report published this month by the Urban Institute, a Washington D.C. think tank, highlights the problem by ranking the affordable housing needs of rural communities across the United States. The study, entitled Rental Housing for a 21st Century Rural America, analyzed the 152 rural counties that have the most severe affordable housing needs.

Very specific areas exhibited a particular need, including the southern border from Texas to California, the Southern Mississippi Delta, and the Southeast, including Florida, Alabama, and Georgia.

Rural counties, which the Urban Institute defines as those that qualify for U.S. Department of Agriculture housing programs, were ranked by seven factors, including high rates of population growth, high rates of poverty, low vacancy rates and high unemployment rates. Counties that exhibited four or more of these factors were considered to be in severe need of affordable housing.

"It was surprising to see so many rural communities that are struggling with just having very few vacant rental units available, and to see so many rural communities with very few federally subsidized rental units," Corianne Scally, the author of the report and a senior research associate at the Urban Institute, told the Huffington Post.

In comparing rural areas to more urban environments, you might expect the reasons for the housing crisis to be vastly different. However, some of the problems are by and large the same.

Today's tenants and aspiring homeowners have a laundry list of issues that can prevent them from finding affordable housing, including an increased cost of living, stagnating household incomes, and gentrification. Many wonder whether it's worth it to rent, or if it's in their best interest to buy. Others wonder if they're ready to purchase homes, given the state of the housing market. They also have to consider situations in which they could lose their housing, including government acquisition of property, eviction, and the threat of foreclosure when faced with financial hardship.

While there are some similarities, there are also significant differences between urban and rural problems with affordable housing.

"Housing issues in rural communities can get overlooked as living and housing costs tend to be lower there than in the cities," writes Huffington Post contributor Laura Paddison. "However, incomes in many of these areas tends to be lower too, especially in areas that used to rely on the coal industry or that otherwise have limited job opportunities."

It's not just stagnated wages that are an issue. Though America has had a recent economic boom and relatively low unemployment for years now, this hasn't translated into a house-building boom. In fact, rates of construction are at record lows even though the demand for affordable housing is high. In rural areas in particular, there are specific obstacles preventing developers from starting construction.

"Developers find it difficult to get financing there," David Dangler said in the same Huffington Post piece. Dangler is the director of rural initiatives at the nonprofit NeighborWorks America. He went on to say "The already limited number of firms specializing in affordable rental housing is even smaller when it comes to rural markets. Rural rental housing developments tend to be significantly smaller than their urban counterparts, and financing is complex."

Additionally, developers note that in rural areas, things like water system infrastructure, electricity, garbage, sewage, and other utilities may not be available in the same capacity that they are in urban environments.

One solution to making housing more affordable in these rural areas is through government assistance. Scally, of the Urban Institute, is calling on lawmakers to allocate more money into building affordable rental housing in these areas.

"We found almost 700 counties with rural communities that had equal to or less than 5 percent of federally subsidized units, so again there just hasn't been much investment in these communities even through standard federal programs," she said.

Another solution, the report suggests, is that incentives be given to developers who operate in these underserved markets, whether that be through technical assistance or financial rewards, since many of these developers work in areas devoid of adequate construction material and labor.

Should Democrats succeed in the midterm elections, a number of proposed bills seek to rectify the affordable housing issue. Several bills were introduced earlier this year by Democratic Senators Cory Booker and Kamala Harris.

The most recent bill was introduced in September 2018 by Senator Elizabeth Warren. The American Housing and Economic Mobility Act calls for a half-trillion-dollar investment in affordable housing over the next decade, which would create up to 3.2 million new units for low- and middle-income families. The bill also expands protections against discrimination in banking and housing, and aims to desegregate neighborhoods.

Warren's bill, for example, would make it illegal for landlords to discriminate against renters with federal housing vouchers and would also impose new regulations on credit unions and nonbank mortgage lenders like Quicken Loans. The bill also aims to incentivize states to rectify any racist and discriminatory zoning restrictions; hopes to ease the path for low-income families to move into more affluent communities; and provides federal assistance to first-time homebuyers from formerly segregated areas and those who saw their wealth decline during and after the 2008 financial crisis.

"Much of the housing discussion has been about affordability, production, and tenant protections, which are all really important issues," Philip Tegeler, the executive director of the Poverty and Race Research Action Council tells The Intercept. "What's so powerful about Warren's bill is that it aims to tackle all those things, and it also looks at how are we going to structure our society going forward. Fair housing is really embedded in the legislation, and that's why I find it so creative."

These bills are all highly dependent on Democrats doing well in the midterm elections, or at the very least on bipartisan support. Regardless, amidst America's housing crisis, it's going to be important for governments, whether federal, state, or local, to intervene.

Future carpenter Erik McIntosh was all smiles at Tuesday morning’s announcement from Bay of Quinte MPP Todd Smith that trade apprenticeship programs in Ontario will be much more accessible with the Making Ontario Open for Business Act. Smith was at Mercedes Meadows in Belleville’s east end with developer Eric DenOuden of Hilden Homes to introduce […]

This article provides an introductory guide to WP-CLI, a command-line tool that was created to make developers’ lives easier, allowing them to manage a WordPress site through the command line rather than through the usual admin interface.

WP-CLI was created by Daniel Bachhuber over a decade ago. Since then, it’s become an indispensable tool in every advanced WordPress developer’s arsenal — “deployed and relied upon by almost every major user of WordPress”, according to Matt Mullenweg. Since 2016, WP-CLI has been an official WordPress CLI tool.

WP-CLI is used for installing and setting up a WordPress website, changing its options, administering users, and a host of other things. It can be leveraged to significantly speed up developers’ workflows.

WP-CLI comes as a phar file — short for PHP Archive. It’s a standard for packaging multiple PHP files and other resources as a single application — for simpler distribution and installation.

Installation

WP-CLI presumes, obviously, that we have access to the system shell. This will be pretty straightforward on Linux and macOS systems — particularly on servers — as WordPress is served almost universally from Linux machines. If we have dedicated server hosting, or cloud hosting like AWS, Alibaba Cloud, etc., or if we’re using a VPS from Digital Ocean, Vultr, Linode and the like, SSH comes as a default access option, but these days many shared hosts offer SSH access options. (Some might even come with WP-CLI preinstalled.)

For Windows users, WP-CLI can be installed via Composer, but we recommend readers get acquainted with the Windows Subsystem for Linux, because it makes a native Linux environment available, along with Bash, a package manager like APT, and so on. WordPress is a PHP app, and PHP's native environment is Linux.

Once installed, typing the wp command displays all the available options and possible parameters. One caveat: if we're running as the root user, we need to add --allow-root to our commands.
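For instance (a sketch only; these commands assume WP-CLI is installed, and the plugin listing additionally assumes a WordPress install in the current directory):

```shell
# As a regular user: show environment details about the WP-CLI install
wp --info

# The same commands run as root must be explicitly allowed
wp --info --allow-root
wp plugin list --allow-root
```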

Now that we have it set up, we can explore the commands, and possible usage scenarios.

WP-CLI Commands

WP-CLI aims to offer a fast alternative to the WordPress web admin interface. Its commands are chunks of functionality that offer simple, precise interfaces for performing complex tasks. Besides the bundled commands, WP-CLI defines an API, WP_CLI::add_command(), for integrating third-party commands. These can be distributed either as standalone packages, or as part of WordPress plugins or themes.

In this guide, we’ll review bundled commands — those that come with the default WP-CLI installation — and some more notable third-party commands.

Commands can come as basic, one-argument commands like wp somecommand, or as subcommands under the base command namespace, such as wp somecommand subcommand.

wp core

The wp core command is a namespace consisting of subcommands that deal with WordPress core, so we can download, install, update, convert to multisite, and get information about our WordPress core version:

wp core download will download the latest version of WordPress into the current directory
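Putting the wp core subcommands together, a from-scratch setup might look like the sketch below. The database credentials, site URL, and admin details are all placeholder values, not real ones, and the install step assumes a reachable MySQL database:

```shell
# Fetch the latest WordPress release into the current directory
wp core download

# Generate wp-config.php (placeholder database credentials)
wp config create --dbname=example_db --dbuser=example_user --dbpass=example_pass

# Run the installer with a placeholder site URL and admin account
wp core install --url=https://example.com --title="Example Site" \
  --admin_user=admin --admin_email=admin@example.com --admin_password=example_pass

# Report the installed core version, then update if a newer release exists
wp core version
wp core update
```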

*Job Summary* The role being offered is one of the most crucial key positions in the company. Being a Technologies company that is 100% driven on innovation... ₹3,00,000 - ₹3,50,000 a year. From Indeed - Sat, 03 Nov 2018 12:47:33 GMT - View all Shiliguri, West Bengal jobs

Over the last year, our community has expressed an important need: aspiring JavaScript developers need access to simple, modern tools so they can build great-looking cross-platform enterprise applications more easily and quickly. Well, you've spoken, and we've heard you! We're excited to announce the release of Sencha Ext JS Community Edition (CE), which provides a core framework, hundreds of modern components, a material theme, open tooling, and much more, under a limited commercial-use license, for free.

GA-Alpharetta, Our client will add a seasoned developer to their team to support development of a new financial services application. Candidate must have excellent communication skills, Bachelor's degree and the ability to help drive the development of this application. Client will interview immediately. If interested, please send your Word resume to mwhitehead@genuent.net. Senior Java/JEE Developer - Contract t

Web Applications Developer, Head Office situated in the Randburg area, but work from home. R50,000 to R60,000 CTC. Design & develop web applications (start to finish) for the business and ensure seamless integration with other systems. Ensure web applications ......

British computer scientist Tim Berners-Lee, who invented the World Wide Web, appealed on Monday for companies and governments not to leave behind the half of the world population yet to have internet access, which includes billions of women and girls. Berners-Lee told the opening of Europe’s largest technology conference that everyone had assumed his breakthrough [...]

FL-Anna Maria, HVAC, Maintenance Technician, Handyman, Maintenance, Resort, Hotel Your new company This company is an owner-developer of major hotels and offices in Florida and other US markets. They acquire, develop and manage hotels, lodging properties and temporary furnished residences in the Southeast and the Caribbean. They truly have a family atmosphere and invest in their employees! Your new role You'll b

FL-Sarasota, Estimating position with an established Retail Developer with projects throughout the United States. Client Details With over 50 years in business, this privately held real estate company owns and manages a portfolio of retail, office, industrial, hotel and residential properties across the United States. Description The Project Estimator is tasked with accurately estimating project cost budgets a

Serious marketers are making it a priority to attend THE premiere search marketing conference in 2019: Search Engine Land’s SMX® West, January 30-31 in San Jose. Register by November 17 to lock in Super Early Bird rates, our lowest rates available. Hurry, this offer expires soon! Proven SEO & SEM tactics to stay ahead Even […]

Please Note: Hours per week may be variable based on workload. In addition to the standard application, applicants must also complete the supplemental form at... $14 an hour. From University of Wyoming - Sat, 06 Oct 2018 09:48:58 GMT - View all Laramie, WY jobs