Business organizations around the world are now opting for dedicated Windows hosting services. There are a good number of reasons behind the popularity of Windows dedicated servers, and it is worth mentioning the significant ones. Many point to the supreme efficiency provided […]

I haven’t had much time for blogging lately, but this tip seemed good enough to throw out there for all the screen users. One way I like to organize servers that I’m ssh’d into is using screen windows. As you hopefully know, you can use Ctrl-A c to create sub-windows within […]
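For anyone following along, the basic per-window commands inside a screen session look like this (a sketch of common defaults plus two optional ~/.screenrc tweaks; your own config may differ):

```
# ~/.screenrc — optional tweaks that make per-server windows easier to track
hardstatus alwayslastline "%-w%{= BW}%n %t%{-}%+w"   # show the window list in a status line
defscrollback 5000                                    # more scrollback per window

# Inside a session (all prefixed with Ctrl-A):
#   c       create a new window
#   A       rename the current window (e.g. to the server's hostname)
#   "       list all windows and pick one
#   n / p   switch to the next / previous window
```

Renaming each window after the host it's ssh'd into is what makes the window list genuinely useful as a server organizer.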

I agree with Mats: it seems like the agent is not able to create or access any files or folders on disk. From some quick research, this appears to be a Windows issue when the Null registry key is corrupt:...

KickStart Software for the PC enables quick test setup and data visualization when using one or more instruments.

Key Features
• Save time by automating data collection of millions of readings.
• Set up a multi-instrument test with the ability to independently control up to eight instruments.
• Supports power supplies, source measure unit (SMU) instruments, DMMs, and dataloggers.
• Replicate tests quickly using saved test configurations.
• Use built-in plotting and comparison tools to quickly discover measurement anomalies and trends.
• Installs with a 60-day free trial.

Changes in Version 2.0.5
• New Models 2000, 2010, 2100, 2110 added
• Added the ability to export in Excel format
• Enhanced auto-export to export data every two minutes
• Miscellaneous problems corrected

What’s the worst malware so far into 2018? The worst botnets and banking trojans, according to Webroot, were Emotet, Trickbot, and Zeus Panda. Crysis/Dharma, GandCrab, and SamSam were the worst among ransomware. The top three in cryptomining/cryptojacking were GhostMiner, Wanna Mine, and Coinhive.

And included in the list of top 10 threat actors so far this year, we find Lazarus Group, Sofacy and MuddyWater coming in the top three spots, according to AlienVault. Lazarus Group took the top spot from Sofacy this year. The reported locations for the top 10 threat actors are North Korea, with two groups; Russia, with three groups; Iran, with two groups; China, with two groups; and India, with one. Microsoft Office was the most exploited application, but Adobe Flash, WebLogic, Microsoft Windows, Drupal and GPON routers were also listed in the top 10.

Macs are generally more secure than PCs, thanks to a more secure operating system in which certain aspects of the software are more locked down and harder for rogue software to infiltrate. Also, fewer people own Macs, meaning fewer targets for criminals. However, that doesn't mean it's impossible to get a virus on your Mac, or to receive a suspicious piece of malware.

A report from Malwarebytes found that Mac malware increased by over 270% between 2016 and 2017. That number is likely to rise in 2018, with new threats like OSX.MaMi and Dark Caracal cited in the article as significant ways of disrupting Mac owners. The more Macs are used, the more they’ll be targeted by cyber criminals. And everyone loves the must-have MacBook of the moment, right?

I have an olive oil society. The script (ideally in PowerShell or Python) should import a .ldif file into LDAP (Apache Directory Studio 2.0). The output of the script should be a log of success or error... (Budget: €250 - €750 EUR, Jobs: Linux, Powershell, Python, Shell Script, Windows Server)
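The job above boils down to reading LDIF records and adding each one to a directory server. A real script would hand each parsed entry to python-ldap's `add_s()`; as a minimal stdlib-only sketch of the parsing half (the sample DN and attributes below are illustrative, not from the posting):

```python
# Minimal LDIF reader sketch (stdlib only). A real import script would pass
# each (dn, attrs) record to python-ldap's add_s() and log success or error.
import base64

def parse_ldif(text):
    """Split LDIF text into (dn, {attr: [values]}) records."""
    # Unfold continuation lines: a leading space continues the previous line.
    lines = []
    for raw in text.splitlines():
        if raw.startswith(" ") and lines:
            lines[-1] += raw[1:]
        else:
            lines.append(raw)

    entries = []
    current = None
    for line in lines + [""]:          # trailing "" flushes the last entry
        if not line.strip():
            if current is not None:
                entries.append(current)
            current = None
            continue
        if line.startswith("#"):       # LDIF comment
            continue
        if "::" in line:               # base64-encoded value
            key, _, val = line.partition("::")
            val = base64.b64decode(val.strip()).decode("utf-8")
        else:
            key, _, val = line.partition(":")
            val = val.strip()
        key = key.strip()
        if key == "dn":
            current = (val, {})
        elif current is not None:
            current[1].setdefault(key, []).append(val)
    return entries

sample = """dn: cn=Jane Doe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
cn: Jane Doe
sn: Doe
"""
for dn, attrs in parse_ldif(sample):
    print("OK", dn, sorted(attrs))
```

Looping over the parsed records and writing an "OK"/"ERROR" line per DN would satisfy the posting's logging requirement.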

jv16 PowerTools is a PC system utilities suite designed to make your computer work fast and smoothly. It works by cleaning out unneeded files and data, cleaning the Windows registry, automatically fixing system errors, and applying optimizations to your system. Clean and Speed Up My Computer is the most used tool included in jv16 PowerTools: it automatically identifies, lists, and fixes system errors, and increases Windows startup speed by optimizing the startup-enabled programs on your system. jv16 PowerTools also comes with on/off switches that help when adjusting Windows privacy settings. Most of our customers buy jv16 PowerTools for its ability to clean and optimize their computers.

Specialized in converting both iOS and Android Live Photo files to popular formats such as GIF, JPG, TIFF, BMP, PNG and WEBP, Joyoshare LivePhoto Converter for Windows is regarded as a professional conversion tool. It delivers a thoughtful platform for you to preview any file in real time and selectively choose whatever you need. Furthermore, the software can flexibly adjust resolution and quality and keep the original aspect ratio to output high-quality photos. All of your Live Photos can be converted in single or batch mode without effort. Key features of Joyoshare LivePhoto Converter for Windows:
• Excellent Live Photo converter for Windows
• Easily convert Live Photos to JPEG, GIF, PNG, BMP, WEBP, and TIFF
• Flexibly convert Live Photos individually or in batch
• Adjust quality, resolution and aspect ratio
• Preview and edit Live Photos and all frames in real time

It's not hard to find news stories about waste, fraud, abuse and downright theft in the school privatization sector.

For years, the policy window for privatizing public schools has been wide open, and what was once considered an extreme or at least rare idea—such as outsourcing public schools to private contractors with few strings attached, or giving parents public tax money to subsidize their children’s private school tuitions—has become widespread as charter schools are now legal in all but a handful of states, and voucher programs have proliferated in many forms across the country.

Politicians of all stripes have been extremely reluctant, especially at the national level, to lean into a real discussion of the negative consequences of redirecting public education funds to private operators, with little to no regulation for how the money is being spent. Candidates have instead stuck to a “safe boilerplate” of education being “good” and essential to “the workforce” without much regard to who provides it.

But policy windows can be fleeting (remember “the deficit crisis”?), and multiple factors can rejigger the public’s views. Indeed, in campaigns that candidates are waging in the upcoming midterm elections, one can see the policy window on school privatization gradually shifting back to support for public schools and increasing skepticism about doling out cash to private education entrepreneurs.

‘Vulture Schools’

It is the wave of new progressive candidates that appears to be shifting the policy window on school privatization.

Take the campaign of progressive superstar Randy Bryce, running for the congressional seat Paul Ryan held in Wisconsin. The Badger State recently expanded statewide a voucher program that was confined to Milwaukee and Racine, and charter schools have expanded significantly under the leadership of Republican Governor Scott Walker.

On his website, Bryce provides the usual bromides about “every child deserves a quality education” and “charter, private and traditional public schools can all thrive,” but he then adds the curious statement that “no student should see money taken from their classroom in order to serve another.” What does that mean?

Click through the “learn more” prompt and you’ll watch a video in which he makes a much stronger statement about the problems of privatizing public schools. “We can’t afford two school systems, a public one and a private one,” he elaborates, and he blasts “vulture schools that don’t have the same accountability and don’t have the same rules.”

The school he brings up as an example, which closed after head count day while its owners “moved to Florida,” is real: a husband-and-wife team abruptly closed their Milwaukee private school, after taking more than $2.3 million in state voucher money, and moved to Florida to start another one.

The latest scandal breaks from Arizona, where the state auditor found that parents who used the state’s voucher-like education savings program spent more than $700,000 on cosmetics, music, movies, clothing, sports apparel, and other personal items. Some even tried to withdraw cash with the state-issued debit cards. The state has not recovered any of the money. But the state legislature recently passed a bill to expand the voucher program, which is now being challenged by a recall effort on the ballot on Tuesday.

Earlier this month, in Florida, the founder of a company that operated charter schools in seven counties was found guilty of using those schools to steer millions of dollars into his personal accounts. In one school district alone, “nearly 1,000 students were affected by the chaos and disruption that ensued.”

In California, a recent audit of a charter school found the married couple who ran the school made almost $850,000 in less than two years and secretly hired people and created positions without approval from the school’s board.

A video from Florida that went viral shows an African American boy being denied admission to a private school that his parents used public school voucher money to enroll him in. An enormous white cross adorns the school’s front lawn. This and other similar occurrences of discrimination by voucher-funded private schools in the Sunshine State have prompted the NAACP to call for an investigation into all private schools accepting vouchers. Around the same time, an op-ed appeared in a Florida newspaper recounting the scandal of a voucher-funded private school that stiffed teachers and skipped rent payments. Teachers filed formal complaints about a “lack of basic school supplies,” academic “irregularities,” student safety concerns, and inadequate staffing. But when the school was evicted, it simply moved to a new location and started the whole flimflam all over again.

In Georgia, a police investigation of a charter school found the governing board terminated the school’s leader, made no public announcement of the firing, and never told parents why. At another Georgia charter school, parents were told to “watch your bank accounts” after 6,000 school records were mysteriously transferred to a personal email account.

In Nevada, an analysis of the state’s charter school industry found they increase racial and economic segregation by enrolling far fewer low-income kids and far more white and Asian students than public schools do. A state audit of a charter school in New Mexico found tens of thousands of dollars have been stolen by the school’s employees.

Some Regulatory Control, Please

One doesn’t need to “cherry pick” to find news stories about waste, fraud, abuse, and downright theft in the school privatization sector. The above examples all happened within the last month.

Of course, financial scandals happen in public schools too. That’s why they’re heavily regulated. But the notion that “parent choice” can keep charter schools and private voucher schools clean and honest is disproven nearly every day.

In Washington, D.C., there now seems to be some willingness to address the mountain of fraud created by charter schools and voucher programs. Prompted by a massive scandal involving an online charter school in Ohio, Democratic senators want the top watchdog agency for the federal government to investigate the business practices of online charter schools.

Their investigations can’t stop there. A recent analysis of states with the most charter schools and the most charter closures finds that the federal government dumps millions into these schools but provides little oversight and guidance for what to do when they close, leaving millions of dollars in taxpayer money at risk.

More Progressive Democrats Against Privatization

The endless revelations of corruption in the charter school and school voucher racket are now what’s driving policy, more so than dry, empirical studies about whether privatizing public schools “works” academically.

You can see that especially in the campaigns of progressive standouts like Andrew Gillum, who is running against Ron DeSantis to be the next governor of Florida, a state that is rife with charter school and voucher scandals.

While members of U.S. Education Secretary Betsy DeVos’s family are bankrolling the DeSantis campaign to push their agenda for charters and vouchers, Gillum is determined to stanch the flow of public dollars to the state’s many voucher programs and make charters more accountable for how they spend public money.

A review compiled by the Intercept of progressive candidates running for Congress singles out Leslie Cockburn running in Virginia’s 5th Congressional District. Cockburn opposes school vouchers and shows her skepticism for charter schools by noticing their problems with teacher turnover and their lack of oversight. At a recent meet-and-greet, she said, “We want more funding for public schools, not less. We need to not take away funds from public schools and give them to charter schools or private schools.”

Another candidate, Kara Eastman, running in Nebraska’s 2nd Congressional District, says on her website, “We must resist the administration’s political nominees who advertise the benefits of expanding charter schools.”

A candidate endorsed by the Progressive Change Campaign Committee (PCCC), Dana Balter, running in New York’s 24th Congressional District, is a special education teacher turned Syracuse professor who got the endorsement of the powerful state teachers’ union largely because she “understands that giving money away to charter schools is not the right approach.”

Candidates for state houses have similar positions. In the race for West Virginia Senate 1st District seat that pits Democrat William Ihlenfeld against Republican Senate Majority Leader Ryan Ferns, Ihlenfeld says, “I am not a supporter of charter schools … I don’t think charter schools are a good idea for West Virginia. I don’t think we can afford to allow the private sector to come in and profit from precious education resources.”

Of course, some progressive candidates still stick to the old script of “investing in schools” with little regard to who runs them, and a few still cling to the school privatization cause. But the trend that made privatizing public schools an acceptable if not preferential policy has at least stalled, if not completely been thrown into reverse.

(EMAILWIRE.COM, November 07, 2018 ) Global PVC Windows market size will increase to XX Million US$ by 2025, from XX Million US$ in 2017, at a CAGR of XX% during the forecast period. In this study, 2017 has been considered as the base year and 2018 to 2025 as the forecast period to estimate the market...

It is 2018 and this error message is a mistake from 1974. This limitation, which is still found in the very latest Windows 10, dates back to BEFORE STAR WARS. This bug is as old as Watergate. pic.twitter.com/pPbkZiE57t

58478
A village stone house, good general condition, with a nice terrace and great view. Some renovation works to complete, such as electricity, heating, paint and plumbing. New PVC windows. A fireplace can be created. On the ground floor: spacious...
4 rooms · 1 bathroom · mortgage · heating · fitted kitchen · terrace · internet

Your new company
Hays IT is recruiting for a System Architect to work for a growing health technology organisation in Cornwall. You will contribute to the design of technical teams while coaching and mentoring individuals in different technical departments. As a leader, you will maintain a continuous strategy of personnel improvement, monitoring progress and results against set targets.

Your new role
The successful System Architect will work within the Microsoft technology stack, utilising in-depth programming expertise and SQL knowledge on a variety of systems, from websites, Windows clients and mobile devices through to high-volume, highly available SQL databases. There is a large quantity of legacy software, and the aim is to develop using up-to-date technologies without compromising the existing functionality.

What you'll need to succeed
• Commercial development experience in C++.
• Responsibility for the development of quality software products, including requirements capture, analysis, design, build and deployment.
• Expert knowledge of C++, OOP and SQL.
• NHS experience.
• Knowledge of Agile working and development.

What you'll get in return
• £45,000 - £55,000 per annum. Based in Cornwall.
• 28 days annual leave (inc. bank holidays), increasing to 33.
• Pension scheme. Health care plans. Salary sacrifice bicycle schemes.

What you need to do now
If you're interested in this System Architect role in Cornwall, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this System Architect job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion on your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at hays.co.uk

70439
Semi-Detached Property With 4 Bedrooms in good decorative order with Courtyard Parking, Rear Gardens and a Recent Modern Kitchen. This property is ideal as a holiday home, just enough land to cope with and amazing views from the rear windows and...
4 rooms · 2 bathrooms · mortgage · heating · fitted kitchen · furnished · garden · parking · shower · internet

If you want to build web apps in a very short amount of time using python, then Flask is a fantastic option.

Flask is a small and powerful web framework (also known as a “microframework”). It is also very easy to learn and simple to code. Based on my personal experience, it was easy to start with as a beginner.

Before this project, my knowledge of Python was mostly limited to Data Science. Yet, I was able to build this app and create this tutorial in just a few hours.

In this tutorial, I’ll show you how to build a simple weather app with some dynamic content using an API. This tutorial is a great starting point for beginners. You will learn how to build dynamic content from APIs and deploy the app on Google Cloud.

The end product can be viewed here.

To create a weather app, we will need to request an API key from Open Weather Map. The free version allows up to 60 calls per minute, which is more than enough for this app. The Open Weather Map condition icons are not very pretty, so we will replace them with some of the 200+ weather icons from Erik Flowers instead.
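As a quick sketch, the request to Open Weather Map's current-weather endpoint can be assembled like this (the API key below is a placeholder for the one you request):

```python
# Sketch: build the Open Weather Map current-weather request URL for a city.
# API_KEY is a placeholder -- substitute the key you requested above.
from urllib.parse import urlencode

BASE_URL = "http://api.openweathermap.org/data/2.5/weather"
API_KEY = "your-api-key-here"

def build_weather_url(city, units="metric"):
    """Return the full request URL for one city."""
    query = urlencode({"q": city, "units": units, "appid": API_KEY})
    return f"{BASE_URL}?{query}"

print(build_weather_url("London"))
```

Fetching that URL (for example with the requests library pinned later in requirements.txt) returns the weather data as JSON.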

This tutorial will also cover: (1) basic CSS design, (2) basic HTML with Jinja, and (3) deploying a Flask app on Google Cloud.

The steps we’ll take are listed below:

Step 0: Installing Flask (this tutorial doesn’t cover Python and PIP installation)
Step 1: Building the App structure
Step 2: Creating the Main App code with the API request
Step 3: Creating the 2 pages for the App (Main and Result) with Jinja, HTML, and CSS
Step 4: Deploying and testing on your local laptop
Step 5: Deploying on Google Cloud.
Step 0 ― Installing Flask and the libraries we will use in a virtual environment.

We’ll build this project using a virtual environment. But why do we need one?

With virtual environments, you create a local environment specific to each project. You can choose the libraries you want to use without impacting your laptop's environment. As you code more projects on your laptop, each project will need different libraries. With a separate virtual environment for each project, you won’t have conflicts between your system and your projects, or between projects.

Run Command Prompt (cmd.exe) with administrator privileges. Not using admin privileges will prevent you from using pip.
(Optional) Install virtualenv and virtualenvwrapper-win with PIP. If you already have these system libraries, please jump to the next step.
#Optional
pip install virtualenvwrapper-win
pip install virtualenv
Create your folder with the name “WeatherApp” and make a virtual environment with the name “venv” (it can take a bit of time)
#Mandatory
mkdir WeatherApp
cd WeatherApp
virtualenv venv
Activate your virtual environment with “call” on Windows (same as “source” on Linux). This step switches your environment from the system to the project's local environment.
call venv\Scripts\activate.bat
Create a requirements.txt file that includes Flask and the other libraries we will need in your WeatherApp folder, then save the file. The requirements file is a great tool to also keep track of the libraries you are using in your project.
Flask==0.12.3
click==6.7
gunicorn==19.7.1
itsdangerous==0.24
Jinja2==2.9.6
MarkupSafe==1.0
pytz==2017.2
requests==2.13.0
Werkzeug==0.12.1
Install the requirements and their dependencies. You are now ready to build your WeatherApp. This is the final step to create your local environment.
pip install -r requirements.txt
Step 1 ― Building the App structure

You have taken care of the local environment. You can now focus on developing your application. This step is to make sure the proper folder and file structure is in place. The next step will take care of the backend code.

Create two Python files (main.py, weather.py) and two folders (static with a subfolder img, templates).
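The resulting layout should look roughly like this (a sketch; only the files and folders named above are required):

```
WeatherApp/
├── venv/            # the virtual environment from Step 0
├── requirements.txt
├── main.py          # routes (server)
├── weather.py       # API call and data shaping
├── static/
│   └── img/         # weather icons
└── templates/       # HTML pages rendered with Jinja
```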
Step 2 ― Creating the Main App code with the API request (Backend)

With the structure set up, you can start coding the backend of your application. Flask’s “Hello world” example only uses one Python file. This tutorial uses two files to get you comfortable with importing functions to your main app.

The main.py is the server that routes the user to the homepage and to the result page. The weather.py file defines a function that calls the API to retrieve the weather data for the selected city; that function populates the result page.
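A minimal sketch of the weather.py side, assuming the standard Open Weather Map current-weather JSON shape (the function and field names here are illustrative, not the tutorial's exact code):

```python
# weather.py sketch: shape an Open Weather Map JSON payload for the template.
# Field names follow the public current-weather response; the rounded
# temperature is what the result page would display.

def shape_weather(payload):
    """Extract the fields the result page needs from the API response."""
    return {
        "city": payload["name"],
        "description": payload["weather"][0]["description"],
        "icon": payload["weather"][0]["icon"],
        "temperature": round(payload["main"]["temp"]),
        "humidity": payload["main"]["humidity"],
    }

# A trimmed example payload, as returned by the current-weather endpoint:
sample = {
    "name": "London",
    "weather": [{"description": "light rain", "icon": "10d"}],
    "main": {"temp": 11.6, "humidity": 81},
}
print(shape_weather(sample))
```

main.py would call a function like this inside its result route and pass the returned dict to the Jinja template.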

The HTML pages weather and result are the ones the backend main.py routes to, and they give the visual structure. The CSS file brings the final touch. There is no JavaScript in this tutorial (the front end is pure HTML and CSS).

It was my first time using the Jinja2 template library to populate the HTML file. It surprised me how easy it was to bring in dynamic images or use functions (e.g. rounding weather values). Definitely a fantastic template engine.

At this stage, you have set up the environment, the structure, the backend, and the frontend. The only thing left is to launch your app and to enjoy it on your localhost.

Just launch the main.py with Python
python main.py
Go to the localhost link shown in cmd with your web browser (Chrome, Mozilla Firefox, etc.). You should see your new weather app live on your local laptop. :)
Step 5 ― Deploying on Google Cloud

This last step is for sharing your app with the world. It’s important to note that there are plenty of providers for web apps built using Flask. Google Cloud is just one of many. This article does not cover some of the others like AWS, Azure, Heroku…

If the community is interested, I can provide the steps of the other cloud providers in another article and some comparison (pricing, limitations, etc.).

To deploy your app on Google Cloud you will need to 1) Install the SDK, 2) Create a new project, 3) Create 3 local files, 4) Deploy and test online.

Install the SDK following Google’s instructions
Connect to your Google Cloud Account (use a $300 coupon if you haven’t already)
Create a new project and save the project id (wait a bit until the new project is provisioned)
Create an app.yaml file in your main folder with the following code:
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.app

libraries:
- name: ssl
  version: latest
Create an appengine_config.py file in your main folder with the following code:
from google.appengine.ext import vendor

# Add any libraries installed in the "lib" folder.
vendor.add('lib')

There are tons of Python web frameworks, and Flask is one of them, but it is not a full-stack web framework.

It is “a microframework for Python based on Werkzeug, Jinja 2 and good intentions.” It includes a built-in development server and unit testing support, and it is fully Unicode-enabled with RESTful request dispatching and WSGI compliance.

Installation

To install Flask, follow the steps below:

Step 1: Install a virtual environment

If you are using Python 3, you don't have to install anything extra, because it already comes with the venv module for creating virtual environments.

If you are using Python 2, the venv module is not available. Install virtualenv instead, then create the environment with the following command:

virtualenv venv

On Windows:

\Python27\Scripts\virtualenv.exe venv
Step 2: Activate the environment

Before you work on your project, activate the corresponding environment:

. venv/bin/activate

On Windows:

venv\Scripts\activate

Your shell prompt will change to show the name of the activated environment.

Step 3: Install Flask

Within the activated environment, use the following command to install Flask:

$ pip install Flask

Flask is now installed. Check out the Quickstart or go to the Documentation.

Create an application

So, let's build the simplest possible hello world application.

Follow these steps:

Assuming you are already in the myproject folder, create a file `hello.py` and write the code below.

Import the Flask class. An instance of this class will be our WSGI application.

from flask import Flask

Next we create an instance of this class. The first argument is the name of the application’s module or package. If you are using a single module (as in this example), you should use __name__ because, depending on whether it’s started as an application or imported as a module, the name will be different ('__main__' versus the actual import name). This is needed so that Flask knows where to look for templates, static files, and so on.

app = Flask(__name__)

We then use the route() decorator to tell Flask what URL should trigger our function. The function is given a name, which is also used to generate URLs for that particular function, and it returns the message we want to display in the user’s browser.

@app.route('/')
def hello_world():
    return 'Hello, World!'

Make sure to not call your application flask.py because this would conflict with Flask itself.
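Putting the pieces above together, the complete `hello.py` is just:

```python
# hello.py -- the complete minimal Flask application assembled from the
# snippets above.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'
```

That is the whole file; the next step shows how to run it.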

To run the application you can either use the flask command or python’s -m switch with Flask. Before you can do that you need to tell your terminal the application to work with by exporting the FLASK_APP environment variable:

$ export FLASK_APP=hello.py
$ flask run
# Or you can use
$ export FLASK_APP=hello.py
$ python -m flask run
Go to http://127.0.0.1:5000/ to see your project running.

Contacts are the most important part of an e-mail application; without knowing the e-mail address (contact) we cannot send mail. So, if you have migrated from one e-mail client to another, it is extremely important to transfer your contacts. To export contacts from Windows Live Mail to Outlook, we can use the import/export method.
Transfer of EML contacts to Outlook PST consists of two steps:
1. Export contacts from Windows Live Mail to CSV
2. Import the CSV to Outlook PST
Follow the guide below. Read More..

Xamarin, as you all know, is a framework which helps mobile app developers develop different kinds of mobile applications for iOS, Android and the Universal Windows Platform. Apps are usually written in the XAML or C# languages. For developers who already know languages like Java, Swift and Objective-C, Xamarin is easy to pick up. It is also possible for developers to use the Visual Studio setup and code cross-platform applications in XAML or C#. As the C# code is compiled into native code, your Read More..

Extremely well-maintained and up-to-date manufactured home with low-maintenance landscaping. Tall, coffered ceilings throughout with 5 skylights. Kitchen has premium granite counter tops, island, top-of-the-line appliances, and an abundance of storage. 2 dining rooms, including a separate dedicated dining area. Newer flooring throughout the main living areas. Lennox A/C system installed in 2014. Jetted bathtub, high ceilings, and skylight in Master suite. Vinyl double-pane windows throughout. Carport and large storage shed w/electricity on 1 side of home and large protected patio on the other side with plenty of room for patio furniture. All this in the quietest corner of this exceptional Creekside Estates 55+ park conveniently located in Phoenix, close to Talent and Ashland.

This partially furnished 2/2 home has plenty of windows, a dedicated dining area with built-ins, large bedrooms and extra entertaining space on the screened porch. There's plenty of storage for all of your tools, crafts, toys and more in the walk-in closets and attached storage shed. Perfect for someone looking to relocate to warm, sunny Florida! Priced to sell, this home won't last long! Call us for your showing! The clubhouse/pool is located right across the street from this home! This home and community are located perfectly in Lakeland, minutes from the downtown area, restaurants, doctor offices and the hospital. Come see what Woodbrook Estates has to offer! There are so many amenities, from our two community centers, shuffleboard courts, horseshoe pits, heated pool, hot tub and fitness room to a billiards room. Come see what 55+ living has to offer! Whether you’re looking to relax or get active, Woodbrook Estates is ready to serve you! CALL US NOW!

Save time, empower your teams and effectively upgrade your processes with access to this practical 64-Bit Windows Server Toolkit and guide. Address common challenges with best-practice templates, step-by-step work plans and maturity diagnostics for any 64-Bit Windows Server related project. Download the Toolkit and in Three Steps you will be […]

This class combines Combined Knowledge’s “Site Member” and “Site Owner” courses to provide a comprehensive training program for power end users. This course presents thorough coverage from the ground up about how to use, operate, and build sites in a Microsoft Office SharePoint Server 2007 (MOSS 2007) environment. Students first learn about site navigation and data storage and retrieval through instructor-led modules covering topics such as search and effective use of lists and libraries. Building on this foundation, students dive deeper into site administration, learning how to create and manage sites, lists, libraries, views and workflows. Security and rights administration are also covered. Functional concepts and best practices are interwoven into the modules to provide a framework for the topics. *Note - This course is oriented to Microsoft Office SharePoint Server, but is also applicable to Windows SharePoint Services 3.0.

African ladyboy mos (hormone.tits small.tits shaved teen afro-haircut dark.skin skinny) meets a dude in a bedroom. they start already nude. so no strip. ;) mos blows the dudes prick, so does the guys for mos. then they fuck around in dif. positions (pooch, missionary, prone bone, rev. cowgirl). the scene ends with a cumshot on mos little titties.

After sucking each other's cocks and Daddy Mugs rimming his hot ass, it was time for more. Julian is not a bottom boy, but for Daddy Mugs he was. It took Daddy Mugs a little time and a lot of lube to get his fat cock in his asshole, but he did, and boy did it feel great. He fucked him, then let him hop on his cock and ride it, and then he finished him off with him on his back, pulling out and shooting a nice load of cum on his stomach.

An interspecies brothel for male monsters who come for the sake of human girls.

Working as one of the exclusive monster prostitutes, Hal is one of the divine sex goddesses. Incredibly insatiable and frivolous, she assists dark souls!

However, sex with monsters on their terms is not an easy task. Abnormal concepts: fossilization, sexification, fusion, liquefaction, distortion, demonization ...

When these creatures give free rein to their crazy desires, their partner gets much more than their usual effects. Despite this, Hal happily takes it all without too much thought, and that is why she is their bright ray of hope... like a brave angel of mercy.

Position Objective: The IT / Support Help Desk Assistant provides end-user support on all software applications, as well as support for all internal and external computers and applications, under the direction of the IT Manager. IT support and customer service/support are the main duties of this position.

Provides Help Desk and technical support, assisting end users with their day-to-day technical duties and issues, and performs IT-related duties internally for the branch office. Responsible for supporting the Electronic Medical Record and Practice Management applications in both server-based and remotely hosted environments. Provides guidance and limited training for end users during Help Desk support functions.

Work closely with IT Manager, Desktop/Network/Applications staff to perform IT Help Desk and operations procedures according to policies and guidelines.

Utilize and follow desktop management and support procedures for issue resolution.

Receive, record, and resolve support calls during normal business hours and on-call hours if applicable.

Install new software drivers and patches; troubleshoot applications as assigned.

Provide user training and customer orientation when necessary.

Maintain high levels of competency in these areas: medical software applications supported by the company and operations support; end-user functions and workflows; access and security; troubleshooting methods and skills; customer service; and Help Desk-level support for peripherals, desktops, networks, and the applications used in the organization.

Must have a strong team mentality: the ability to work positively in a team environment, interact favorably with people, and work effectively as a team member in both remote and internal office environments.

Ability to communicate at both interpersonal and technical levels, verbally and in writing.

Excellent organizational skills.

Ability to read, write, and communicate in English.

Education:

Bachelor's degree in Computer Science, Information Systems, or a related field and at least one to two years of relevant subject matter experience (job experience in IT help desk, operations, and technical support), or a High School diploma/Associate's degree and at least two to four years of relevant subject matter experience.

A technology tutorial website is filling a position for a Telecommute Technology Writer.
Core Responsibilities of this position include:
Explaining complicated topics and making them easy for anyone to follow
Writing regular posts on a long-term basis, at least one to two 500-800-word articles every week
Writing tutorials for various operating systems such as Windows, Mac and Linux, as well as mobile OSes (iOS and Android)
Required Skills:
Must have a good working knowledge of the WordPress blogging platform
Must be able to take cool screenshots and turn them into images that can be used in articles
Must be able to participate actively in reader comments on all your published posts

Description: Haruka's home was destroyed by the evil organization Jakou. When she grew up, she resolved to take revenge on Jakou. Now she pilots two-legged robots for the Civil Justice League. Since the League is owned by a private company, it sometimes takes part in PR activities, which occasionally include erotic acts.

The Ballston Spa Business & Professional Association (BSBPA) is once again encouraging all businesses in the downtown area to deck the halls! For the past few years, the BSBPA has recognized businesses whose window decorations and displays have enhanced the Village of Ballston Spa during the holiday season. Businesses with decorated windows whose lights are... Read more »

The history of an ancient castle provides the backdrop for this impressive, elaborately staged movie by Oscar-winning director Michel Ricaud, with an excellent cast. Former military (and sexual...) knight Hubert de Pognac must investigate a palatial residence, once a campaign base and now an ideal resort for tourists. Even the ghosts haunting the ruins join in: SM training, gorgeous anal gallops, rich sperm cascades and co. provide fabulous diversions. For this noble crusade, Hubert has one simple motto: total orgy!

Spanking was part of life for so many girls, and we ask them to tell us about it in our intimate spanking interviews. How did it feel to be in trouble again, getting a bare-bottom spanking? Our beautiful models tell their spanking stories to Clare Fonda, and then we act out a domestic spanking scene. The panties come down, the bottoms get red, and tears are cried. We also work with models who have never been spanked and are curious about what corporal punishment feels like. Spanked Sweeties has brought us many of the newest spanking models and contains 2,000 video clips, so you can see them in an in-depth interview followed by an old-fashioned spanking.

Busty Tricia wanders over to milk some sperm by giving a gorgeous blowjob to a guy; he cums in her mouth, after which Tricia cheerfully swallows his cum shot, but she is not satisfied with this, and the insatiable chick still wants more.

Tricia continues her oral work, taking the guys in turn until they quench her thirst.

Lewd Sibling Duo features two main protagonists: a boy, a prominent young biologist, and his older friend, a college professor. On a quest to rid the population of a terrible infertility plague that threatens the very existence of humanity, they discover that their solution has some interesting side effects on women.

Combining the broad reach of Windows, best-in-class developer tools, a re-imagined user experience, and a built-in store, Windows 8 is the largest developer opportunity — ever.

Are you ready? Then join us for this free, full-day event filled with coding, sharing, plenty of food, and perhaps the occasional Lightning Talk on topics determined by your apps and questions.

FAQs

What is a hackathon?

These hackathons are a really fun way to get “down and dirty” with the technology and experience development alongside others in the same room. It's an open Windows 8 code fest, where you’ll put what you know into practice and be eligible to win some great cash prizes! Code to your heart’s content, with Windows 8 experts available to guide you through every step of the process. It’s the perfect opportunity to get your dream application underway, or to finish that app you’ve already started.

What do I need to bring to the event?

You will need to bring a photo ID, your registration, a computer with Windows 8 and Visual Studio Express 2012 for Windows 8 (or any of the commercial editions of Visual Studio 2012), and your Windows 8 app idea (or a partially completed app, if you have one).

What are the prizes?

We have three cash prizes:

First place is $1000.00

Second place is $500.00

Third place is $250.00

Winners will be responsible for taxes (if any) and you must be present to win.

Who are the judges?

Judging will be performed by a panel of 3 judges (still being determined) and will be based on application completeness and how well the application follows the Modern UI principles.

Who are the sponsors?

We wouldn't be able to host this event without our corporate sponsors. They are providing us everything from food to prize money.

(Please note that there is limited space available for this event, so be sure to register early.)

A new book on WCF was just published by Juval Lowy at IDesign. For those of you that don't know, Juval is Microsoft's Regional Director for the Silicon Valley area and has helped in the internal strategic design reviews for the .NET Framework. He has presented sessions at the last two Tech·Ed conferences on WCF and helped shape the technical strategy and direction for WCF with Microsoft.

I haven't picked up my copy yet, but will be getting one soon. The book focuses on the "why" behind particular design decisions in WCF and is a practical approach to building WCF enabled services.

There is also a new "Rough Cuts" edition of Learning WCF available by Michele Leroux Bustamante (also at IDesign). This book is aimed at the WCF beginning to intermediate programmer and focuses on the actual transmission (what happens on the wire) and interoperability techniques, while Juval's book is aimed at more advanced developers and focuses on the system side of developing WCF applications.

In any case, both of them should be good to add to your library. I know I will be adding them to mine.

NJ-Troy Hills, Job Description We are Peak Systems, a technology staffing and managed services consulting firm connecting technical consultants with various industry opportunities. Technicians who join us may receive new certifications for working with our clients; we issue payments weekly, offer direct deposit, and have many nationwide opportunities. We are currently seeking a Windows10/MAC Deployment Support T

It is really amazing, but Phoenix’s hubby got used to her stunning beauty and no longer gets so excited. Like a very good wife, Phoenix hides her disappointment from him but, luckily, she has a boyfriend who is ready to fuck the nasty blondie as hard as possible.

That day his wife caught them fooling around and decided to join the excited couple and to teach a hottie to satisfy her man to the full.

Vicious Master Kirk returns to Brutal Tops to angrily damage this feeble sub. The two guys find themselves banged up in a tiny prison cell. The dominant, well-hung top takes the opportunity of being locked up alone with this runt to thrash him with his belt and order him to open up his arsehole. Then the Master power-pummels the runt's hole with a massive dildo attached to a power tool. The agony on the sub's face is apparent as the Master screams orders at him before pissing all over him. Finally, the sub has to suck his Master's dick before collapsing to the floor.

Size: 566 MB
Censorship: Absent / there is a patch for removal
Developer/publisher: TemptationXXX
Platform: PC/Windows/MacOS
Edition type: In development
Tablet: Not required
Version: Ep1 v1.0
Game language (plot): English
Interface language: English

I'm in a business where the owner has business units in different industries, and I am looking to find a network architect to review and evaluate the current set up, create a plan for a new system design with different pricing / timeline options and give suggestions on vendors to use for each... (Budget: $50 USD, Jobs: Cisco, General Labor, Network Administration, System Admin, Windows Server)

Microsoft has announced it is working to port tools from its Sysinternals utility suite to Linux. The suite of applications has gained an almost legendary status amongst Windows system administrators. Yesterday, Microsoft employee David Fowler announced the release of the first Sysinternals tool for Linux, process dump creation utility ProcDump. Microsoft executive Mario Hewardt, Principal […]

KNote X is an upcoming Windows 10 2-in-1 from a little known Chinese OEM – Shenzhen Alldocube. The company is quite popular on Chinese online retailers, but unlike Chuwi, hasn’t made a mark outside the country as yet. These OEMs are known for offering middle-of-the-road devices – Android and Windows tablets and laptops – at […]

After a combination of leaks, private testing, and random public testing, Microsoft has finally made an official announcement about the new shopping cart and wish list features that are coming to the Microsoft Store on Windows 10 PCs, web, and Xbox One consoles. Both features (see above image) work the same as most other shopping […]

The Windows 10 Mobile and PC digital diary app, Diarium, is currently selling in the Microsoft Store app store with a 75% discount. The app usually retails for $19.99 but is now $4.99 for the next 12 days only. Diarium, as its name suggests, is an app that functions as a diary or journal. It […]

FROM eight storeys up at the top of a building, crag martins (Ptyonoprogne rupestris) zoom and soar close to windows. There are canyons of smaller and taller structures and these plump, sturdy cliff dwellers are chasing insects through shafts of...

Seller requests closing to take place at Manausa Law Firm, P.A. Easy to maintain 3 bedroom, 2 bathroom home! Generous parking area, spacious kitchen with stainless appliances and laundry room access, large windows with crown moulding, and easy flow for entertaining! Partially fenced back yard with storage shed and a fire pit is perfect for those upcoming fall nights and football parties!

A 2 bedroom/2 bath condo with a golf course view!
Move right in with new replacement windows in front, Granite counters, HVAC (2012).
Enjoy the golf course view from your living room, master bedroom, and screened lanai with new railing.
Centrally located to everything Sarasota has to offer and walking distance to shopping.

Afterparty Platform: PC (Microsoft Windows and Mac OS) From Night School Studio (the makers of Oxenfree) comes Afterparty, a story of two best friends who suddenly find themselves dead. They embark on a pub-crawl in the depths of hell to gain entrance to Satan’s after-party, the end goal being to beat him in a drinking […]

This is a bright and open 2 bedroom apartment in a quiet house in the Airport Heights neighbourhood of St. John’s.

The bright open-concept kitchen, dining, and living room area is great for relaxing and entertaining. The kitchen has lots of counter space for cooking plus a sit-up bar for your guests. There is plenty of cupboard space for storage. Standing at the double sink, you will have a view overlooking the backyard of the home.

The bathroom has a full-size tub, with a shower.

Both bedrooms are bright with large windows and have closets for storage.

A private washer and dryer are included in the apartment. So no need for trips to the laundromat.

There is a large porch with a big closet, great for storing your coats, shoes, and boots.

The house is in the Airport Heights neighbourhood of St. John’s. There are convenience stores within walking distance. Plus walking and biking trails in the area that connect to the rest of the city. You can leave your apartment on foot and be hiking in Pippy Park in a matter of minutes. There is a softball field and soccer pitch minutes away also. Plus there is a community center that offers many activities.

The house is situated in a prime location and is very close to grocery stores, shopping, restaurants, gyms, golf courses, and the Outer Ring Road.

Small, older-style 2 bedroom house for rent. Main highway, Seal Cove, CBS. Close to the trades school and Peacekeepers highway. Less than 15 minutes to Mount Pearl. New roof, all new wiring and electrical, new windows, and new flooring, all replaced within the last 3 yrs. Fridge, stove, dishwasher, microwave. Detached garage and baby barn. Sits on a large mature property. Will consider a small pet. Available December

Beautifully updated three bedroom family home, situated in a kid-friendly neighbourhood just minutes from all amenities. Home features a newly remodelled dream kitchen with patio doors leading out to the southern-exposed rear deck with breathtaking views of Carbonear and the surrounding hillside. Fully developed basement with a cozy propane fireplace in the family room and an oversized games room with extra bath and spare bedroom. Fully landscaped backyard, detached garage complete with fridge, TV and wood stove. Some of the upgrades include new argon windows, extra insulation under the new siding, new shingles, and two new decks front and rear. This home is move-in ready with pride of ownership evident throughout. MLS# 1185164

House and property (with 22x18 shed) for sale. Included with purchase: fridge and stove, washer and dryer, living room furniture, oil furnace, woodstove, vertical blinds, table and chairs. All vinyl windows. Message for further details.

Beautiful 2100 sq. ft. 3 bedroom town house (middle unit in photo) for sale. Only two years old. $339,000. 21 Anderson Avenue. Walk to MUN or Health Sciences. Currently rented on a one year lease for $1500/month, POU. 2.5 baths, which includes master ensuite. Full open basement which is partially partitioned and also plumbed in for an additional bathroom. Easily install two more bedrooms. Laminate throughout with high quality carpet on stairs. HRV. Fridge, stove, over-the-range microwave, washer and dryer included. New blinds on all windows. 10'x12' patio. Off street parking. Email through this posting or Call/Text 691-8110

Beautiful 3 bedroom town house for sale. End unit on left in photo. Only two years old. $335,000. Currently rented on a one year lease for $1500/month, POU. 2.5 baths, which includes master ensuite. Full open basement which is partially partitioned and also plumbed in for an additional bathroom. Easily install two more bedrooms. Laminate throughout with high quality carpet on stairs. HRV. New blinds on all windows. Fridge, stove, over-the-range microwave, washer and dryer included. 10'x12' patio. Walk to MUN or Health Sciences. Off street parking. Email through this posting or Call/Text 691-8110

8Bitdo’s N30 Pro 2 is a full-sized Bluetooth controller that works with Windows, MacOS and Android devices, as well as the Nintendo Switch and the Raspberry Pi. It has an analog d-pad, clickable thumbsticks, motion controls and vibration.

GA-Macon, POSITION PROFILE The Sr. Systems Administrator responsibilities consist of design, implementation, maintenance, support and disaster recovery for various systems, including, but not limited to, Microsoft Systems, VMWare, and Active Directory. In addition, the Administrator will help manage the full technical environment, including systems administration responsibilities across a wide array of tech

Microsoft Edge includes a new word-lookup tool, and in this guide, we'll show you how to use it.
Windows 10 version 1809 (October 2018 Update) delivers an updated version of Microsoft Edge that introduces a number of improvements, including a new feature to look up definitions for words when reading a document or page without needing to open a new tab.
The new dictionary comes built into Microsoft Edge, and unlike the Google dictionary extension for Chrome, it's a feature that works while viewing a web page using Reading view, reading an ebook, or working with a PDF file.
In this Windows 10 guide, we'll walk you through the steps to get started with the Edge dictionary available starting with the October 2018 Update.
...

Looking for a thin, powerful laptop that runs Windows? The incredible shrinking bezels continue with the new ASUS ZenBooks, boasting powerful insides and a near-borderless screen. ASUS has three new ZenBooks out, in 13, 14, and 15” screen sizes. ASUS isn’t kidding about being nearly borderless, promising the laptops have a 95% screen-to-body ratio. These bezels seem so unrealistic,...

Zombie Bitcoin Defense Free Download PC Game setup in single direct link for Windows. It is an amazing action game.
Zombie Bitcoin Defense PC Game 2018 Overview
Zombie Bitcoin Defense is an action packed top down shooter about surviving the zombie apocalypse with a shop full of weapons and a [...]

Dungeon League Free Download PC Game setup in single direct link for Windows. It is an amazing action, indie and role playing game.
Dungeon League PC Game 2018 Overview
The Dungeon Master has grown bored waiting to kill the next hero foolish enough to enter his dungeon. The answer? Invite all the [...]

Migration Consultant (OpenVMS) > Location: Slough > Division: Transoft > Function: Professional Services > Reporting to: Martin Farndale We’re Advanced Join a business that embraces innovation, gives you the scope to seize every opportunity and will help get you where you want to go. Life at Advanced begins in an unprecedented environment with a role that matters, taking you on a fast paced journey of discovery, however big that might be. We’re one of the UK’s largest and fastest growing software companies. True partnership is the defining thing that makes us different from the competition. We pride ourselves on delivering focused software solutions for public sector, enterprise commercial and health & care organisations that simplify complex business challenges and deliver immediate value. Team & Role We are seeking an experienced Migration Consultant with a successful track record of delivering application and data migration solutions from the OpenVMS legacy system. The Migration Consultant will specialise in refactoring legacy systems from 3rd generation languages such as C/C++. This role requires in depth C/C++ skills and extensive real-world experience preferably in a broad range of business software systems. This role involves the application of Transoft technologies to help deliver a sustainable future for crucial business applications. The Requirements You will: Learn and operate the Transoft modernisation toolset Function as an effective project team member Maintain a cooperative nature at all times Maintain the ability to both take instruction and work under own initiative as required. Be able to maintain a sharp focus on and finish intensive projects. 
Have a good awareness of technological developments and best practice Be able to adapt and apply new ideas as appropriate Successfully hand over solutions to relevant internal or external staff, including knowledge transfer Deliver and promote quality, excellence and continuous service improvement for Professional Services engagements Keep abreast of new features and functions made available within the Advanced 365 suite of products We would like you to have: The successful candidate requires strong consultancy skills and is an excellent motivator of individuals in order to meet deadlines and manage change. You will have real-world experience of delivering concurrent small & medium scale projects on time and within budget. Experience needed: 5+ years of C/C++ design, implementation and support In depth experience within Windows and Linux development including operating systems APIs OpenVMS Experience is an advantage Database design and implementation – SQL is a must Modern build and deployment experience (e.g. 
Gradle, CMake) Scripting in Perl or Python an advantage Transoft toolset training will be provided Essential Skills: Strong communication skills, written and spoken Well presented and good interpersonal skills Comfortable in both structured and unstructured working environments Energy and enthusiasm to deliver a successful project Comfortable in customer facing role Education / Qualifications A university degree in a relevant subject Join the A Team Excellent benefits from day one: contributory pension, life insurance, income protection insurance, childcare voucher salary sacrifice, cycle to work scheme, and employee assistance programme 25 days holidays Special focus on training and development with the opportunity to advance your career through our internal Talent Development Team The ability to work with engaged colleagues who share a passion for solving business problems Working in an organisation that encourages 360 feedback at all levels Be part of an organisation that has recently been ranked by Deloitte in the Top 50 fastest-growing tech companies

Improving your system productivity is essential for getting work done more swiftly. With this in mind, many of us turn to all-in-one maintenance tools so we can optimise our computer, clean junk and fully remove installed applications. It’s just easier to own one tool to perform all your key tasks. Parallels recently launched Toolbox for Mac which offered a number of system tools from a handy drop-down menu. Frankly, when the first Toolbox was released, it offered little more than what was already available in macOS. You could quickly take a screengrab, record your screen, create an archive and more.… [Continue Reading]

Microsoft has embraced Linux more and more over the years, and the latest demonstration of this is the company's decision to port the free Sysinternals utilities to work on the platform. The first tool to make its way to Linux is ProcDump, which can be used to create crash dumps. While not as feature-rich as the Windows version, the Linux port is still a valuable tool. And, importantly, there are more Sysinternals tools making their way to Linux. Sysinternals software long proved popular with Windows users. So much so, that over a decade ago Microsoft decided to buy the company behind… [Continue Reading]
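As a rough illustration of how the Linux port is used (the flags mirror the Windows version; the PID and thresholds below are placeholders, not values from the article):

```shell
# Illustrative ProcDump-for-Linux invocations; 1234 is a placeholder PID.

# Capture one core dump of the target process immediately.
sudo procdump -p 1234

# Capture a dump when CPU usage crosses 65 percent.
sudo procdump -C 65 -p 1234

# Capture up to 3 dumps when committed memory exceeds 500 MB.
sudo procdump -M 500 -n 3 -p 1234
```

The dumps it writes can then be opened with gdb like any other core file.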

(EMAILWIRE.COM, November 06, 2018 ) An exclusive Digital Pathology Market research report created through broad primary research (inputs from industry experts, companies, and stakeholders) and secondary research, the report aims to present the analysis of Global Digital Pathology Market by Type, By Application, By Region - North America, Europe, South America, Asia-Pacific, Middle East and Africa. The report intends to provide cutting-edge market intelligence and help decision makers take sound investment evaluation. Besides, the report also identifies and analyses the emerging trends along with major drivers, challenges and opportunities in the global Digital Pathology Market. Additionally, the report also highlights market entry strategies for various companies across the globe.

These players have been upgrading their product portfolios by applying for approvals as well as launching new products. For instance, in July 2017, 3DHISTECH launched the Windows 10-compatible TMA Control 2.7 SW, which can be applied to its complete line of tissue microarray (TMA) products. This update was aimed at aiding the placement of different samples into one paraffin block. In addition, it was also designed to save time and costs in tissue preparation, staining and slide preparation.

Worldwide Digital Pathology Market Analysis to 2025 is a specialized and in-depth study of the Digital Pathology industry with a focus on the global market trend. The report aims to provide an overview of global Digital Pathology Market with detailed market segmentation by product/application and geography. The global Digital Pathology Market is expected to witness high growth during the forecast period. The report provides key statistics on the market status of the players and offers key trends and opportunities in the market.

The report provides a detailed overview of the industry including both qualitative and quantitative information. It provides an overview and forecast of the global Digital Pathology Market based on product and application. It also provides market size and forecast till 2025 for the overall Digital Pathology Market with respect to five major regions, namely North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA) and South America (SAM), which are later sub-segmented by respective countries and segments. The report evaluates market dynamics affecting the market during the forecast period, i.e., drivers, restraints, opportunities, and future trends, and provides an exhaustive PEST analysis for all five regions.

Also, key Digital Pathology Market players influencing the market are profiled in the study along with their SWOT analysis and market strategies. The report also focuses on leading industry players with information such as company profiles, products and services offered, financial information of last 3 years, key development in past five years.

Reasons to Buy
- Save and reduce time carrying out entry-level research by identifying the growth, size, leading players and segments in the global Digital Pathology Market
- Highlights key business priorities in order to assist companies to realign their business strategies.
- The key findings and recommendations highlight crucial progressive industry trends in the Digital Pathology Market, thereby allowing players to develop effective long term strategies.
- Develop/modify business expansion plans by using substantial growth offering developed and emerging markets.
- Scrutinize in-depth global market trends and outlook coupled with the factors driving the market, as well as those hindering it.
- Enhance the decision-making process by understanding the strategies that underpin commercial interest with respect to products, segmentation and industry verticals.

What does David Dimbleby know about the people of Scotland!! Where I live there were YES and NO leaflets in windows and there were NEVER any arguments or threats between neighbours.
I would love to know where he got his information!!

By default, Microsoft Windows isn't even bootable on the new Apple systems until support for Windows is enabled via the Boot Camp Assistant macOS software. The Boot Camp Assistant will install the Windows Production CA 2011 certificate that is used to authenticate Microsoft bootloaders. But this doesn't set up the Microsoft-approved UEFI certificate that allows verification of code by Microsoft partners, including what is used for signing Linux distributions wishing to have UEFI SecureBoot support on Windows PCs.
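From a Linux live session, the firmware certificate situation can be inspected directly; a hedged sketch (requires the mokutil package, and the exact output varies by machine and firmware):

```shell
# Report whether UEFI SecureBoot is currently enabled.
mokutil --sb-state

# List certificates enrolled in the firmware signature database (db).
# On a Boot Camp-prepared Mac you would expect to see the Windows
# Production CA 2011 but not the Microsoft Corporation UEFI CA 2011
# that signs Linux shim bootloaders.
mokutil --db | grep -i 'Subject:'
```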

Are you interested in running iOS apps on your Windows and Mac computers? Then getting an iOS emulator would be the best option available for you to consider. It can provide you with the chance to get a smooth experience when you are running iOS apps on Mac and Windows. What exactly is an iOS […]

The Estinctwexforda.info pop-ups are a social engineering attack that tries to trick users into subscribing to its push notifications so that they can send unwanted advertisements directly to your desktop. These Estinctwexforda.info pop-up ads are caused either by malicious advertisements on the sites you visit or adware. This guide was written to help Windows users […]


My ETW trace collection tool has reached a significant milestone. It is becoming more and more THE tool if you want to record ETW traces while also recording user input (keyboard, mouse) along with screenshots, so you can see exactly which actions the user performed to get into that state. What really stands out is that you can also use it as a distributed trace collection tool to start ETW tracing on two machines simultaneously while sending the user input to the remote machine and recording it there as well. That allows you to easily find distributed problems with one profiling run. Since you can navigate by the recorded user events you do not need to rely on exactly synchronized clocks between both machines.

You can see more of it at https://etwcontroler.codeplex.com/. What is best: on Windows 10 you only need to unzip ETWController v2.1 and you can record right away, because WPR (Windows Performance Recorder) is part of Windows 10. That enables interesting scenarios like sending preconfigured ETWController zips to clients which only need to

Unzip ETWController.zip

Start ETWController.exe

Go to Trace Collection and press Start

Reproduce the use case

Press Stop

Send gathered data to service

The new version can also record screenshots for every mouse click and Enter key press. To make sense of the many collected screenshot files, an HTML report is created when the trace is stopped. It is located beside the ETL file in a directory named xxx.etl.Screenshots which contains a Report.html file. You can view it with the browser, where you can configure the image size as you need it to get a quick overview.

For each click event a file with the name Screenshot_dd.jpg is saved, where the number is the count of input events (mouse down, mouse up, keyboard down) since trace start. To check if the UI did respond after 500ms, a second screenshot is taken with the name Screenshot_ddAfter500ms.jpg. Now it is much easier to find the interesting time points in an ETL file. If you record screenshots from your clients you have to make sure that you do not disclose personal data, or you analyze the data on the client machine directly to ensure that no personal information is leaked.

For each click event the mouse location is marked with a red square because the mouse cursor is not part of the captured data.
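The naming scheme described above can be sketched as a small helper (my own illustration, not ETWController code):

```csharp
using System;

static class ScreenshotNames
{
    // inputEventNumber counts input events (mouse down, mouse up, key down)
    // since trace start.
    public static string ForClick(int inputEventNumber)
        => $"Screenshot_{inputEventNumber}.jpg";

    // Name of the second shot taken 500ms later to check whether the UI responded.
    public static string After500ms(int inputEventNumber)
        => $"Screenshot_{inputEventNumber}After500ms.jpg";
}
```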

If you record the same use case several times you need to ensure that the old data is not overwritten by accident. ETWController helps you by checking the Append Index checkbox by default. Beside the checkbox there is a textbox with a number which is incremented every time profiling is stopped. If you start/stop profiling several times you will get in your directory files of the form

which makes it hard to accidentally overwrite data from important profiling runs. This of course also works for distributed profiling, where files on both machines get the same extension appended. After saving the data you can click the Show Output button which will open the generated ETL file in WPA with my own Simplified profile, which now ships with ETWController as well!

Simplified WPA Profile Included

It is located in the ETW folder beside the ETWController executable and is named Simple.wpaProfile.

I will continue to update the profile under source control and not on Google drive anymore. To see what you can do with the custom profile check out

Now you can make sense of the input events and check out how the UI did look at specific times. The Report.html gives you access to all of the screenshots to correlate user visible changes with the profiling timeline.

Performance Regression

Another very common use case is to find performance degradations in regression tests. For that it makes sense to get as many screenshots as possible. By default we record a screenshot every 2s, but in the Configuration dialog you can enter any value down to 100ms for the Screenshot Timer.

Taking a screenshot needs about 100ms, so in effect we get a screenshot every 200ms. When we execute the use case with so many screenshots we can first visually compare them to see if the reported degradation actually exists. I have seen many reported performance degradations gathered by automated tests which vanished when the same use case was executed manually. Automated tests normally use some easy-to-gather finish event to calculate the measured duration of the test case. If the software changes, that event might be slightly or completely off from what the user experiences. Having many screenshots is a huge bonus. The customer/tester no longer needs to tell you what he did because you see it in the screenshots anyway, and you do not need to try to reproduce the issue in the lab, which will often fail.

Other Tools

ETWController does not force you to drive everything via the UI. You can also use it as a sophisticated keyboard and mouse input event logger with screenshot functionality. If you have your own or better tracing tooling you are free to use it.

This command line will capture mouse and keyboard events and save the screenshots to your own directory without any visible UI

If you have ever tried to create a WPF application with a larger memory footprint (>500MB) you will notice random hangs in the UI which become several-second hangs for no apparent reason. On a test machine with a decent Xeon CPU I have seen 35s UI hangs because WPF frequently (up to every 850ms) calls GC.Collect(2). The root cause of the problem is that WPF was designed with a business application developer in mind who never gets resource management right. For that reason WPF Bitmaps and other things do not even implement the IDisposable interface to clean up resources deterministically. Instead the cleanup is left as an exercise for the Garbage Collector, with the Finalizer thread working hand in hand.

That can lead to problems. Suppose a 32 bit application where the user is scrolling through a virtual ListView with many bitmaps inside it. This operation will cause the allocation of many temporary Bitmaps which will quickly become garbage. Because the Bitmaps are small objects on the managed heap but the actual Bitmap data is stored in unmanaged memory, the Garbage Collector sees no need to clean things up for a long time. In effect it did happen that your application ran out of unmanaged memory long before the Garbage Collector was able to release the bitmaps in the Finalizer. That led to one of the worst hacks in WPF. It is called MemoryPressure. Let's have a look how it is implemented:

//
// About the thresholds:
// For the inter-allocation threshold 850ms is the longest time between allocations on a high-end
// machine for an image application loading many large (several M pixel) images continuously.
// This falls well below user-interaction time (which is on the order of several seconds) so it
// differentiates nicely between the two
//
// The initial threshold of 1MB is so we don't force GCs when the total amount of unmanaged memory
// isn't a big deal. The point of this code is to stop unmanaged memory from spiraling out of control
// at that point it's typically in the 10s of MBs. This threshold thus could potentially be increased
// but current testing shows it is adequate.
//
// The max time between collections was set to 30 sec because that is a 'long time' - this is
// for the case where allocations (and frees) of images are happening continously without
// pause - we haven't seen scenarios that do this yet so it's possible this threshold could also
// be increased
//
private const int INITIAL_THRESHOLD = 0x100000;         // 1 MB initial threshold
private const int INTER_ALLOCATION_THRESHOLD = 850;     // ms allowed between allocations
private const int MAX_TIME_BETWEEN_COLLECTIONS = 30000; // ms between collections

/// <summary>
/// Check the timers and decide if enough time has elapsed to
/// force a collection
/// </summary>
private static void ProcessAdd()
{
    bool shouldCollect = false;
    if (_totalMemory >= INITIAL_THRESHOLD)
    {
        //
        // need to synchronize access to the timers, both for the integrity
        // of the elapsed time and to ensure they are reset and started
        // properly
        //
        lock (lockObj)
        {
            //
            // if it's been long enough since the last allocation
            // or too long since the last forced collection, collect
            //
            if (_allocationTimer.ElapsedMilliseconds >= INTER_ALLOCATION_THRESHOLD
                || (_collectionTimer.ElapsedMilliseconds > MAX_TIME_BETWEEN_COLLECTIONS))
            {
                _collectionTimer.Reset();
                _collectionTimer.Start();
                shouldCollect = true;
            }
            _allocationTimer.Reset();
            _allocationTimer.Start();
        }
        //
        // now that we're out of the lock do the collection
        //
        if (shouldCollect)
        {
            Collect();
        }
    }
    return;
}

/// <summary>
/// Forces a collection.
/// </summary>
private static void Collect()
{
    //
    // for now only force Gen 2 GCs to ensure we clean up memory
    // These will be forced infrequently and the memory we're tracking
    // is very long lived so it's ok
    //
    GC.Collect(2);
}

This beauty calls GC.Collect(2) every 850ms if no Bitmap was allocated in between, or every 30s regardless of how many Bitmaps were allocated. With .NET 4.5 we got concurrent garbage collection which dramatically reduces the blocking of all application threads while a garbage collection is happening. For common workloads a "normal" .NET application gets 10-15% faster without any code change. These improvements are all nullified by calling a forceful full blocking garbage collection.

To demonstrate the effect I have created a simple test application which allocates 1 GB of small objects on a background thread while on the UI thread we allocate one WPF Bitmap every 850ms, and compare that to allocating an "old" WinForms Bitmap object, also every 850ms.

If you measure for some heap sizes you quickly see that your application will become dramatically slower the more memory it consumes due to the forced blocking garbage collection caused by WPF. The x-axis shows the managed heap size in MB and the y-axis shows the time needed to allocate 1 GB of small objects in a background thread.
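The core of such a measurement can be sketched as a plain console program (my own reconstruction, not the original test application): a timer forces GC.Collect(2) every 850ms the way WPF's MemoryPressure class did, while a loop allocates ~1 GB of small objects and measures the elapsed time.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ForcedGcCost
{
    // Allocate ~1 GB of small objects and measure the elapsed time.
    // When forceCollect is true, a timer triggers GC.Collect(2) every 850ms,
    // mimicking WPF's MemoryPressure behavior.
    static TimeSpan Allocate(bool forceCollect)
    {
        using (Timer t = forceCollect ? new Timer(_ => GC.Collect(2), null, 850, 850) : null)
        {
            var window = new byte[100_000][]; // rooted sliding window keeps some objects alive
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 1_000_000; i++)
            {
                window[i % window.Length] = new byte[1000]; // 1 million * 1 KB = ~1 GB allocated
            }
            sw.Stop();
            return sw.Elapsed;
        }
    }

    static void Main()
    {
        Console.WriteLine($"Without forced GC:              {Allocate(false).TotalMilliseconds:F0} ms");
        Console.WriteLine($"With GC.Collect(2) every 850ms: {Allocate(true).TotalMilliseconds:F0} ms");
    }
}
```

The bigger the live managed heap, the more expensive each forced Gen 2 collection becomes, which is exactly the scaling the chart shows.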

You have a multi GB WPF application and the user experience is just awful and slow? You can google for good answers on Stackoverflow

which tell you that you need to use Reflection to set private fields in the internal MemoryPressure class of WPF. Not exactly a production grade "fix" to the issue.

But there is hope. The new public beta of .NET Framework 4.6.2 contains a fix for it. The MemoryPressure class is gone, and your Stackoverflow "fix" will cause exceptions if you did not prepare for the impossible: Microsoft did dare to remove internal classes. WPF now adheres to the long-recommended GC.AddMemoryPressure call to tell the Garbage Collector that some managed objects also consume significant unmanaged memory.
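To illustrate the pattern WPF now follows, here is a hedged sketch (my own type, not WPF code) of how a class that owns a large unmanaged buffer can report its true cost via GC.AddMemoryPressure and undo that when the memory is freed:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical image wrapper: the managed object is tiny but the pixel
// buffer lives in unmanaged memory, so we tell the GC about the real cost.
sealed class UnmanagedImage : IDisposable
{
    IntPtr _pixels;
    readonly long _size;

    public UnmanagedImage(long sizeInBytes)
    {
        _size = sizeInBytes;
        _pixels = Marshal.AllocHGlobal((IntPtr)sizeInBytes);
        GC.AddMemoryPressure(sizeInBytes); // GC now weighs this object correctly
    }

    public void Dispose()
    {
        if (_pixels != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_pixels);
            _pixels = IntPtr.Zero;
            GC.RemoveMemoryPressure(_size); // undo the pressure when memory is freed
        }
        GC.SuppressFinalize(this);
    }

    ~UnmanagedImage() { Dispose(); }
}
```

With this the GC can schedule collections based on real memory cost instead of a timer forcing full blocking collections.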

With .NET 4.6.2 you finally get the possibility back to create snappy managed applications without long forced garbage collection pauses. You can measure the GC pause times with my custom WPA profile in no time:

That is nice but you can see with my custom WPA profile and the streamlined default.stacktags file even more:

There you can clearly see that while the managed heap grows, the Induced GC times get bigger, just as you would expect from the GC regions. To get the same view you need to download my simplified WPA profile which I have updated with the latest stacktags I found useful during past analysis. To make it active you need to open Trace - Trace Properties from the menu, remove the current file and add the downloaded stacktags file. Or you can simply overwrite the default.stacktags file that comes with WPA.

The new improved stacktags file gives you fast insights into your application and your system which is not really possible with other tools. With a nice stacktags file you can create your very own view of the system. The updated stacktags file contains tags for common serializers, exception processing, and many more things which are useful during analysis of performance issues or application failures.

Finding handle leaks in all processes at once for all handle types without a debugger is no longer impossible. Since Windows 8.1 (0?) each handle creation and close call is instrumented with an ETW event. You only need to turn it on, execute your use case for some minutes (or hours if you really need to), and then stop the recording.

To start full handle tracing you need to install the Windows Performance Toolkit from the Windows 10 SDK or WDK. Then enter in an Administrator shell

wpr -start Handle

Execute your use case

wpr -stop c:\temp\Handle.etl

Then you can open the resulting .ETL file with WPA and add the graph Handles - Outstanding Count by Process to your analysis view.

Now you can filter for your process (e.g. in my case I did start Visual Studio). The original view gives me a system wide view of all processes which did allocate handles.

That is a nice view, but if you are after a handle leak you need the Create Stack. No problem. Right click on the table header and add Create Stack to the column list. Then you should load the symbols from MS and add your local symbol paths

With the call stacks you can drill into the allocation stack of any handle and search for your leak:

The graph nicely shows the not-yet-freed handles, but the table shows all allocations, which can be a bit confusing when you search for the not-yet-released handles. For big handle leaks the existing view is already enough, but if you need to drill down in the table only into call stacks of not-yet-released handles, you need to add a filter to exclude all lines in the table which have released a handle before the trace was stopped.

More Details

To add that filter, click the gear icon or press Ctrl+E:

Because we are doing advanced things we click on the Advanced icon

and there we can finally add the trace end time which is visible at the bottom of the WPA main window

Now the graph and the table are updated and only show the handles which have not been released since the start of Visual Studio in our example, which should match the number of allocated handles shown by Task Manager.

You can also get more fancy. Normally I have some test which shows a handle leak in a specific process after some time. I start leak tracing, then the test, and later I stop it. Since I do not want to treat first-time initialization effects as leaks I can exclude e.g. the first 5 minutes of the test. I also want to make sure that I do not count handles as leaks which were allocated near the end simply because the test was still running when the trace stopped. To do that I need to look for recurring patterns in the trace and exclude all handles which were created at some later time when the test run was already complete. The final result is a filter which hides all entries which match

Why So Late?

I have no idea why this very useful capability of WPA was never documented anywhere. It showed up in the Windows 8 SDK years ago, but handle leak tracing never worked for me because I was at that time still on Windows 7.

Which Handle Type did I Leak?

The easiest way is to use another tool. Process Hacker is a Process Explorer clone which can show for any process a nice summary. Double click on a process and select the Statistics tab:

When you click on Details you can sort by Handle Count and you immediately know for which handle type you are searching a leak:

PerfView for Advanced Recording

The only other tool I know of which can enable handle leak tracing is PerfView v1.9 from 2/19/2016 or later

PerfView has the unique capability to stop tracing based on a performance counter threshold. This is extremely useful to find e.g. a sudden handle spike which occurs during an overnight stress test at 5 a.m. When you arrive at the office at 6 a.m. (already too late) the handle spike will long have been overwritten by newer handle allocations in the 500MB ring buffer. Now you can get your breakfast, arrive relaxed at 9 a.m., start analyzing the random handle spike which your colleagues were missing while they were sitting in front of Windbg over night, and present the results to your manager at 10 a.m.

The only issue I have with PerfView is that its performance counter query is locale sensitive, which makes it not trivial to specify on e.g. a Hungarian machine. For the record: on my German machine I can start handle leak tracing which stops when the handle count performance counter for the first devenv instance exceeds 2000 with

The feature finally seems to have been set free with the Windows 10 SDK, but handle leak tracing has existed in the kernel since Windows 8.1 (0?); no tool was capable of enabling it until now. Before that ETW feature, handle leaks were quite hard to track down, but with such advanced and pretty easy to use tooling it is just a matter of two command line calls to get all allocated handles from all processes in one go.

If you leak User (Windows, Menus, Cursors, …) or GDI objects (Device Contexts, Brushes, Fonts, …) you still need to resort to intercepting the corresponding OS methods in your target process like I have shown in Generic Resource Leak Detection with ETW and EasyHook but as usual you need to use the right tool for the job at hand to nail all bugs of your application.

Conclusions

With the addition of ETW tracing for handle allocations it has never been this easy to solve handle leaks. Previously it was a pretty complex undertaking, but now you can follow the steps above and you will have a nearly 100% fix rate if you analyze the gathered data correctly. If this has helped you to solve a long-searched leak, or you have other useful information you want to share, sound off in the comments.

I had an interesting case where a new WPF control was added to a legacy WinForms application. The WPF control worked perfectly in a test application, but for some strange reason it was very slow in the final WinForms application where it was hosted with the usual System.Windows.Forms.Integration.ElementHost. The UI did hang and one core was always maxed out. Eventually it settled down after some minutes, but even simple button presses did cause 100% CPU on one core for 20s. If you have high CPU consumption the instinctive reaction of a developer is to attach a debugger and break into the methods to see where the issue is. If you use a real debugger like Windbg you can use the !runaway command to find the threads with the highest CPU usage

Eventually I would find some non-waiting stacks, but it was not clear if these were the most expensive ones and why. The problem here is that most people are not aware that the actual drawing happens not in user mode but in an extended kernel space thread. Every time you wait in NtUserWaitMessage the thread on the kernel side can continue its execution, but you cannot see what is happening as long as you are only looking at the user space side.

If debugging fails you can still use a profiler. It is about time to tell you a well hidden secret of the newest Windows Performance Toolkit. If you record profiling data with WPR/UI and enable the profile Desktop composition activity, new views under Video will become visible when you open the trace file with WPA. Most views seem to be for kernel developers, but one view named Dwm Frame Details Rectangle By Type is different. It shows all rectangles drawn by DWM (the Desktop Window Manager). WPA shows not only the flat list of updated rectangles and their coordinates but also draws them in the graph for the selected time region. You can use this view as a poor man's screenshot tool to visually correlate the displayed message boxes and other windows with the performed user actions. This way you can visually navigate through your ETL and see which windows were drawn at specific points in your trace!

That is a powerful capability of WPA which I was totally unaware of until I needed to analyze this WPF performance problem. If you are more of an xperf fan you need to add to your user mode providers list

Microsoft-Windows-Dwm-Core:0x1ffff:0x6

and you are ready to record pretty much any screen rectangle update. This works only on Windows 8 machines or later. Windows 7 knows the DWM-Core provider but it does not emit the necessary events to draw the dwm rectangles in WPA. The rectangle drawing feature of WPA was added with the Win10 SDK release of December 2016. Ok, so we see more. Now back to our perf problem. I could see that only two threads are involved, consuming large amounts of CPU in the UI thread and in the WPF render thread for a seemingly simple screen update. A little clicking around in the UI would cause excessive CPU usage. Most CPU is used in the WPF rendering thread

If that does not make much sense to you, you are in good company. The WPF rendering thread is rendering a composited window (see CComposition::Present) which seems to use a feature of Windows which also knows about composited windows. After looking with Spy at the actual window creation parameters of the hosting WinForms application

it turned out that the Windows Forms window had the WS_EX_COMPOSITED flag set. I write this here as if it were flat obvious. It is certainly not. Solving such problems always involves asking more people for their opinion on what could be the issue. The final hint that the WinForms application had this extended style set was discovered by a colleague of mine. Nobody can know everything, but as a team you can tackle pretty much any issue.

A little googling reveals that many people before me also had problems with composited windows. This flag basically inverts the z-rendering order. The visual effect is that the bottom window is rendered first. That allows you to create translucent windows where the windows below yours shine through as background. WPF uses such things for certain visual effects.

That is enough information to create a minimal reproducer of the issue. All I needed was a default Windows Forms application which hosts a WPF user control. The complete zipped sample can be found on OneDrive.

When these three conditions are met you have a massive WPF redraw problem. It seems that two composited windows cause some loops while rendering inside the OS, deep in the kernel threads where the actual rendering takes place. If you let WPF use HW acceleration it seems to be ok, but I have not measured how much GPU power is then wasted. Below is a screenshot of the sample WinForms application:

After the root cause was found, the solution was to remove the WS_EX_COMPOSITED window style from the WinForms hosting window, which did not need it anyway.

Media Experience Analyzer

The problem was solved, but it is interesting to see the thread interactions happening while the high CPU issue occurs. For that you can use a new Microsoft tool named Media Experience Analyzer (XA) which was released in Dec 2015; the latest version is from Feb 2016. If you thought that WPA is complex then you have not yet seen how else you can visualize the rich ETW data. This tool is very good at visualizing thread interactions in a per-core view like you can see below. When you hover over the threads the corresponding context switch and ready thread stacks are updated on the fly. If you zoom out it looks like a star field in Star Trek, just with more colors.

If you want to get the most out of XA you can watch the videos at Channel 9 which give you a pretty good understanding of how Media Experience Analyzer (XA) can be used

When should you use WPA and when Media Experience Analyzer?

So far the main goal of XA seems to be to find hangs and glitches in audio and video playback. That requires a thorough understanding of how the whole rendering pipeline in Windows works, which is a huge field on its own. But it can also be used to get a different view on the data which is not so easy to obtain in WPA. If threads are ping-ponging each other this tool makes it flat obvious. XA is already powerful, but I am not entirely following its UI philosophy where you must visually see the issue in the rendered data. Most often tabular data like in WPA is more powerful because you can sort by columns and filter away specific call stacks, which seems not to be possible with XA. What I miss most in XA is a simple process summary timeline like in the first screenshot. XA renders some nice line graphs but that is not very helpful to get a fast overview of the total CPU consumption. If you look at the complete trace with the scheduler events and the per-process CPU consumption in

XA

WPA

I have a much easier time in WPA identifying my process with the table and color encoding. In XA you always need to hover over the data to see its actual value. A killer feature in XA would be a thread interaction view for a specific process. Ideally I would like to see all threads as bars where the bar length is either the CPU or wait time. Currently I can only see one thread color-encoded by the core it is running on. This is certainly the best view for device driver devs, but normally I am not interested in a per-core view but in a per-thread timeline view. Each thread should have a specific y-value and the horizontal bar length should show either its running or waiting time (or both), with a line to the readying thread as it is already done today.

That would be the perfect thread interaction view and I hope that will be added to XA. The current version is still a 1.0 so expect some crashes and bugs but it has a lot of potential. The issues I encountered so far are

If you press Turn Symbol Off while it is still loading, it crashes.

The ETL file loading time is very high because it seems to include some private MS symbol servers, where the UI hangs for several minutes (zero CPU but a bit of network IO).

UI Redraws for bigger (>200MB) ETL files are very slow. Most time seems to be spent in the GPU driver.

XA certainly has many more features I have not yet found. The main problem with these tools is that the written documentation only touches the surface. Most things I have learned by playing around with the tools. If you want to share your experiences with WPA or XA please sound off in the comments. Now stop reading and start playing with the next cool tool!

The number of bugs produced by developers is legion, but why are advanced debugging skills still rare in the wild? How do you solve problems if you do not have the technical know-how to do a full root cause analysis across all used tech stacks?

Simple bugs are always reproducible in your development environment and can easily be found with visual debuggers in your favorite IDE. Things get harder if your application consistently crashes at customer sites. In that case environmental problems are often the root cause, which mostly cannot be reproduced in the lab. Either you install a debugger on the production machines of your customers, or you need to learn how to use memory dumps and analyze them back home.

There are also many other tools for Windows troubleshooting available like Process Explorer, Process Monitor, Process Hacker, VMMap, … which help a lot to diagnose many issues without ever using a debugger. With some effort you can learn to use these tools and you are good to solve many problems you can encounter during development or on customer machines.

Things get interesting if you get fatal sporadic issues in your application which result in data loss, or it breaks randomly only on some customer machines. You can narrow down where the application is crashing, but if you have no idea how you got there then some industry "best practice" anti-patterns are used:

You know the module which breaks and you rewrite it.

You do not even know that. If the problem is sporadic, tinker with the code until it becomes rare enough to no longer be an urgent problem.

That is the spirit of good enough, but certainly not of technical excellence. If you otherwise follow all the good patterns like Clean Code and Refactoring, you will still collect over the years more and more subtle race conditions and memory corruptions in central modules which need a rewrite, not because the code is bad but because no one is able to understand why it fails and able to fix it.

I am surprised that so many companies, especially small ones, can get away with dealing with technical debt that way without going out of business. Since most software projects are tight on budget, some error margin is expected by the customers, and they can live pretty well with worked-around errors. I am not complaining that this is the wrong approach. It may be more economical to bring a green banana to market to see what the customers are actually using and then polish the biggest user-facing features fast enough before the users step away from the product. The cloud business brings in some fascinating opportunities to quickly roll out software updates to all of your customers with new features or fixes. But you need to be sure that the new version does not break in a bad way or all of your customers will notice it immediately.

Did you ever encounter bugs which you were not able to solve? What creative solutions did you come up with?

When you are tracking down handle or other leaks you usually need to resort to Windbg, which is a great tool for unmanaged code but not so much for managed code, because the usual stuff like !htrace only gets the unmanaged call stack. For managed code this approach breaks because you will not see which managed callers did allocate or free a handle. Other Windbg approaches, like putting a breakpoint and printing out the handle and the call stack at the allocate/free methods, work, but for heavily called methods they slow down the application too much. A much easier and faster approach is to hook the methods in question and write out an ETW event for each acquire/release call, where the ETW infrastructure takes care of walking the stack. To hook specific exported methods I usually use EasyHook, which is a great library to hook into pretty much any C style exported method.

Below I present a generic way to hook resource acquire and release methods and trace them with ETW, which enables a generic way to track down any resource leakage as long as the necessary functions are within reach of an API interception framework. To my knowledge this was not done before in any public library, so I take credit for making this approach public. Perhaps I have invented this approach, but one never knows for sure.
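One way to emit the acquire/release events from the hook callbacks is a minimal EventSource (the provider and event names are my own illustration); with stack walking enabled for this provider during recording, ETW captures the call stack for every event:

```csharp
using System.Diagnostics.Tracing;

// Minimal ETW provider written to from the hooked acquire/release methods.
[EventSource(Name = "ResourceLeakTracer")]
sealed class LeakEventSource : EventSource
{
    public static readonly LeakEventSource Log = new LeakEventSource();

    [Event(1)]
    public void HandleAcquired(long handleValue) { WriteEvent(1, handleValue); }

    [Event(2)]
    public void HandleReleased(long handleValue) { WriteEvent(2, handleValue); }
}
```

During analysis, every HandleAcquired event without a matching HandleReleased for the same handle value is a leak candidate, and its recorded stack tells you the allocating caller.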

EasyHook Introduction

EasyHook lets you define managed callbacks to intercept publicly exported methods of any process. In the managed callback you can extend the functionality of an existing API or, which is more often the case, trace method arguments. Suppose you have e.g. an executable which exports a method named Func which takes 5 arguments and returns an int like

The hook creation is self-explanatory, but note that the hook is disabled for all threads by default. You can either specify a list of threads for which the hook should be enabled (LocalHook.ThreadACL.SetInclusiveACL(...)) or you can configure a list of threads for which it should be disabled (LocalHook.ThreadACL.SetExclusiveACL(...)). By calling SetExclusiveACL with a null list or new int[] { 0 }, which is not a valid thread id, you effectively enable the hook for all threads.

That was the whirlwind tour into EasyHook which makes it really easy to tap into otherwise unreachable parts of code execution. Most Windows APIs are prepared to be hooked so you can intercept pretty much anything. In practice all C style exported methods are usually within your reach. With that new capability you can do previously impossible things like

Export a dummy method in your C++ dll which is called from code where sporadic bugs did happen, e.g. to trace each invocation of the dummy method, which passes along internal state variables from the sporadically failing method.

Now you can hook the empty method and intercept the method calls with full call stacks during your automated builds, with minimal overhead and only on the machines where the issue did occur.

Record the passed parameters to any exported method without the need to change the source code (which you probably don't have anyway) of this method.

Alter method arguments at runtime (e.g. redirect file open calls to other files).

….

If you try it out you will find that the interception part works pretty well but for some reason the call stack for x64 processes looks broken when you inspect it in the debugger. We only see the managed intercepted call but no stack frames from our callers:

The problem comes from the fact that under 64 bit stack walking works differently than under 32 bit. When the Windows x64 calling convention was defined, the engineers wanted to get rid of the exception handling overhead in the happy code path which exists in the Windows x86 execution model. With x86, every method which contains an exception handler needs to set up its exception handler callback at runtime even if no exception ever happens.

In contrast to that, x64 code has zero overhead in terms of execution speed no matter how many exception handlers are present in a method! To make that possible, all x64 code must additionally register exception unwind information structures which describe the stack layout and the exception handlers for every part of the function up to the function exit. This enables proper stack unwinding by parsing the additional metadata when an exception is about to be thrown.

x64 Stack Walking and Unwind Information

The exception unwind information is also used by the stack walker, which is part of the reason why our stack is broken. To intercept method calls EasyHook copies assembly code to a memory location which acts as trampoline code for our managed hook callback. This assembler trampoline code has no associated exception unwind information, which is also a problem if you try to step through a hooked method with a debugger. I tried to add the missing exception unwind information to the dynamically emitted assembler code by patching EasyHook. This led me down into the dark corners of the assembler ABI for function prologues and epilogues.

It took a while to wrap my head around it since it was not clear how to verify that the unwind info registration was correct. Luckily Windbg has a nice command to show you the associated unwind information for any address. The command is .fnent xxx, where xxx is the method address. Below is the output of Windbg for the trampoline assembler code with the now dynamically added unwind information:

By using the Windbg stack commands (k and related) I could immediately see if I was on the right track. If you want to see correct x64 method examples you can check out any C/C++ x64 compiler output to see what is possible. The dumpbin tool can also display the exception unwind infos with

dumpbin /unwindinfo dddd.exe/.dll

for any executable or dll.

The version 1 unwind codes are only necessary to describe the function prologue, which is sufficient to reconstruct the stack layout of all saved registers at any point in a method. If you have more exotic epilogues you need to describe them with specific unwind codes as well, which were introduced with Windows 10 (I guess). The unwind info version 2 is not documented at MSDN anywhere yet, but you can have a look at a small snippet of the Windows kernel stack unwinder which is copied into CoreCLR at: https://github.com/dotnet/coreclr/blob/master/src/unwinder/amd64/unwinder_amd64.cpp. See the methods OOPStackUnwinderAMD64::UnwindEpilogue, OOPStackUnwinderAMD64::VirtualUnwind and related.

After adding correct unwind codes, stack walking did work only up to a certain point, where the stack still got corrupted, because EasyHook temporarily replaces the return address of the current method with the CLR stub and a backpatch routine which resets the old return address after our managed hook returns. During this time the stack is by definition not walkable, because the return address of our caller is not even present on the stack while we are executing our managed callback. To get out of this, EasyHook already provides a solution: it temporarily puts the original hook entry point back as the direct return address, which makes the stack walkable again. But that works only while we are inside a hook handler, in EasyHook terms. That means we can restore the stack only before we call the hooked method. After that we cannot restore the return location anymore. That is a problem if we want to log the returned arguments of a hooked method (e.g. the actual handle value of CreateWindow, CreateFile, …). Since I know that our current return address must be the hook specific assembler code which restores the original caller address, I extended NativeAPI.LhBarrierBeginStackTrace and NativeAPI.LhBarrierEndStackTrace of EasyHook to locate the hook stub code and to replace it with the original entry point of the first hooked function of the current thread. After that change EasyHook is able to get you into a state where the stack is fully walkable before and after you call the hooked method, for x64 and x86.

Unfortunately the x64 ETW stack walker of Windows 7 stops at any method which has no matching loaded module (e.g. trampoline hook code and JITed code), so this still does not work with ETW events on Windows 7. Custom ETW events with proper unwind infos will work on Windows 8 or later, though. I still wish that MS would fix the Windows 7 ETW stack walker, because this also breaks profiling of JITed x64 code on Windows 7 machines. 32 bit code has worked since Windows 7.

Generic ETW Resource Tracing

At the time of writing you can see the changes in my EasyHook fork at https://github.com/Alois-xx/EasyHook/commits/develop. I hope that the changes will get into the main line of EasyHook soon. I still have left out an important aspect of a generic ETW leak tracer: how should we trace the data so that it is easy to diagnose with WPA? WPA for Windows 10 can graph custom event data if the values are numeric, non-hexadecimal Sum columns. We need to trace every resource acquire and release. Usually you get back some handle, so this should be included as well. The resource could be something big like a bitmap which has a specific memory size, so it makes sense to trace the allocation size as well. The basic ETW trace calls should therefore have the form

With such an API we can trace all relevant information for CreateFile, CreateWindow, … and the like with a simple ETW trace call. For simple handle based resources, where no metadata about their size is available, it makes sense to use 1 as the allocation size in the Acquire call and -1 in the Release call. If all Acquire and Release calls for an allocator, e.g. CreateWindow, balance out, the WPA Sum column for the total allocation size (Field 2) will balance out to zero. If there is a leak we will see a positive excess in WPA for which we can filter.

Here we see e.g. that our free method, which seems to do the wrong thing, releases with an allocation size of zero 68 times, which is exactly the outstanding allocation size that is graphed in Field 2.

As an example for CreateWindow we would issue

AcquireResource(windowHandle, 1, "CreateWindow")

and for each hooked DestroyWindow call we would trace

ReleaseResource(handle, -1, "DestroyWindow")

You can use the tool ETWStackwalk as a simple window handle leak detector; it is now part of my EasyHook fork. You need to compile ETWStackwalk.exe as Administrator under x64 to register the ETW manifest. You then get something like this as output.

Window Handle Leak Tracing

ETWStackwalk is part of EasyHook and serves as a demonstration of what can be done with it. Here is its help output:

To start window handle leak tracing you need the pid of an already running process, because EasyHook is not able to inject its code while the process is still starting the CLR, which can lead to all sorts of weird errors. Instead you attach to an already running process and tell ETWStackWalk to enable createwindow/destroywindow tracing. You can also configure an output etl file if you wish, or you can stick to the default.

I have created a suitable easyhook.wpaprofile which is dedicated to easily analyzing resource leaks. There you get a nice overview per handle and all associated acquire and release operations. The graph shows how long a handle was open until it was closed again. The Start/Stop ETW opcode name corresponds to the acquire/release calls for this handle. If handles are reused you can still see any excess (too many acquire or release calls) in the AllocSize column, which is a sum over all calls where each CreateWindow counts as +1 and each DestroyWindow as -1. If all calls cancel out, you see a summed AllocSize (Field 2) of zero for that handle. In my test, opening a solution in VS2015, the allocation size shows that 6 additional windows were created and not yet closed. This makes it easy to filter for the outstanding window handles which have an AllocSize > 0. Then you can drill into the call stacks, and you have found the root cause of pretty much any window handle leak with practically zero speed impact on your application. The approach is generic and can be applied to any resource leak. The tester serves only as a demonstration of the combination of ETW and EasyHook, but with this concept we can tackle a wide range of sporadic resource leaks which now become solvable with a systematic approach.

If we hook other generic methods like CloseHandle we are in a worse position, since it is a generic close method which closes many different handle types. But we can (literally) sort things out if we group by handle (Field 1) in WPA and filter away all handles which have a negative allocation size (Field 2). This removes all handles for which CloseHandle was called but no corresponding create call, e.g. CreateFile, was traced. From there on we can continue to search for outstanding file handles by looking at the totals as usual.

Now you know why the handle is always the first argument of our ETW trace calls. If we position the handle at the same parameter location in our ETW manifest, we can group the acquire and release calls by handle, which is exactly what we want WPA to use to generate the graph and table above.

Hooking and ETW Performance

I was telling you that this approach is fast. So how fast can it get? The y-axis is not ms but µs; it shows the average execution time of a hooked, basically empty int Func(int *a, int *b, int *c, int *d, int *e). It is interesting that the raw function execution time jumps from a few nanoseconds to ca. 1µs with ETW tracing, and goes up to 4µs (measured on my Haswell i7-4770K CPU @ 3.50GHz under Windows 8.1) if ETW stack walking is enabled. That means we can get full ETW events with stack walking at a rate of 250.000 calls/s by using this approach. That is more than most tracing frameworks offer, and these do not capture the full call stack. If that is still too slow you can use EasyHook as a pure unmanaged interceptor, which could give you perhaps a factor of 2-4, but then you need to take care of a lot more things yourself.

What is odd is that ETW tracing seems to be slower for x86 than under x64, but I did not look into that more deeply. In general x86 code should still be faster when it does the same thing, because the CPU caches do not grow but the pointer and instruction sizes of code and data increase under x64, which simply needs more cycles, as Rico Mariani explains in A little 64 bit follow-up. As long as you do not need the big address space you should not simply buy the argument that x64 is the future because it is more modern. If you want to do your users a favor you should favor the small is beautiful approach. With small I mean small code, small and efficient data structures and small code complexity.

WPA Performance Issue (Windows 10 SDK 10.0.10586.15)

When parsing custom ETW events, WPA parses the strings in a very inefficient way which can easily lock up your machine for minutes. On my 8 core machine the custom view takes minutes to build up and uses 4 cores to parse the strings of my custom ETW events. The problem is that WPA uses code from a static helper class which is grossly inefficient. Have a look at this profiling data:

WPA uses 4 of 8 cores, where 3 cores are spending CPU cycles spinning to get a lock while only one core is doing actual work (12,5% CPU is one core on an 8 core machine). This effectively single threaded operation is also implemented very inefficiently. The problem comes from TDHHelper.GetStringForPropertyAtIndex, which parses a string from an ETW event. This is done in a static class under a global lock:

When I drill down into one thread I see that EnsureCapacity and EnsureRoomForTokensAndClear are responsible for 43% of the execution time, which would go away if the List&lt;IntPtr&gt; instances were reused. It would also help to cache the parsed strings, which would reduce the time quite a bit when I play around with groupings in WPA. That is a major bottleneck and, in my view, the top performance issue in the current WPA implementation which should be solved. I have not noticed such long delays with WPA 8.1. A quick check reveals that the code is nearly the same in previous versions. So it was always slow, but with more and more custom ETW events this display is becoming much more important than before. If you look at custom CLR events (e.g. exceptions grouped by type) you will notice the very slow reaction of WPA due to this issue.

Outlook

If you have read this far, congrats. You had to digest quite a lot of technical details from many different areas. If you are searching for a way to get rid of Windbg for resource leak analysis and to use a simpler tool, ETWStackwalk could be it. It is easy to extend for any other API you are interested in, and it works for x86 and x64 (Windows 8 and later). If you want to intercept another API you only need to extend the method injector which is declared in

easyhook\Test\ETWStackWalk\Injector.cs

In the Injector you see some examples for CreateFile and CreateWindow which contain mostly boilerplate code. The hook generation code could certainly be simplified. A nice addition would be to compile the interception code on the fly and pass the actual interceptor class as a configuration file, which would make hooking new methods truly dynamic without the need to recompile ETWStackwalk every time a new API needs to be hooked. I had used a variation of this approach for my GDI bug in Windows which actually led to a Windows hotfix. But that was always bound to x86 code only. Now I have found some free time to greatly extend the usefulness of the hooking approach by making EasyHook play nicely with x64 stack walking. So far every resource leak (GDI handles, files, semaphores, desktop heap, …) needed a different approach or tool. It is time to come up with an approach which is powerful and generic enough to tackle all of these resource leaks with one tool. I expect that at least the idea will become state of the art for how resource problems are tackled in the coming years. If you have used this approach already, or if you had success with the idea, please share it here so we all can benefit from learning from each other. It was fun to get my hands dirty with x64 assembly and to find out how the basics of exception handling really work together. Going deep is my first nature. That was the main reason why I studied particle physics: to really understand how the universe works. The discovery of the Higgs boson was really a great thing, and I can at least say that I have tested some small parts of the huge ATLAS detector (which was part of the discovery of the Higgs boson) during my time at the university 15 years ago. It is kind of cool to understand how my computer works from software to hardware down to the subatomic level.

If you are wondering what on earth that guy is doing when he is not in front of a computer: apparently this.

With my two kids, not my boat and my wife we enjoyed two fantastic weeks at the Caribbean Sea on St Thomas this January. The kids had great fun in the warm water, and this was my first vacation where I never froze. Even the nights during winter time were above 25 Celsius. The water is so full of life like I have not seen anywhere else. The Mediterranean Sea looks like a desert compared to this. If you go to Coral World at St. Thomas you see many different fish and corals in aquariums. But if you go snorkeling you will see all the animals from the zoo again, which is really amazing. During that time the people back home in Germany had quite a lot of snow while we were sweating the whole time, which really felt odd. That's all for today. Happy bug hunting!

I see a lot of different code and issues. One interesting bug was where someone removed a few lines of code, but the regression test suite consistently reported a 100ms slowdown. Luckily the regression test suite was using ETW by default, so I could compare the good baseline with the bad one, and I could also take a look at the code change. The profiling diff did not make much sense: there was a slowdown, but for no apparent reason the CultureInfo.CurrentCulture.DisplayName property became ca. 100ms slower.

How can that be? To make things even more mysterious, when they changed some other unrelated code the numbers returned back to normal. After looking into it more deeply I found that the basic application logic did not slow down. Instead some unrelated methods just became much more CPU hungry, internal CLR methods like COMInterlocked::CompareExchange64. The interesting thing is that it happened only under 32 bit; under 64 bit the error went away. If you are totally confused by now you are in good company. But there is hope. I had encountered a similar problem already over a year ago. I therefore knew that it has something to do with the interlocked intrinsics for 64 bit operands in 32 bit code. The most prominent one on 32 bit is

which is heavily used by the CLR interlocked methods. To reproduce the problem cleanly I wrote a little C program where I played around a bit to see what the real issue is. It turns out it is …

Memory Alignment

A picture will tell more than 1000 words:

The CPU cache is organized in cache lines which are usually 64 bytes wide. You can find out the cache line size of your CPU with the nice Sysinternals tool Coreinfo. On my Haswell home machine it prints something like this:

The most important number for the following is the LineSize of 64, which tells us how big the smallest memory unit is that is managed by the CPU cache controller. Now back to our slow lock cmpxchg8b instruction. The effect of the lock prefix is that one core gets exclusive access to a memory location. This is usually implemented on the CPU by locking one cache line, which is quite fast. But what happens if the variable spans two cache lines? In that case the CPU seems to lock all cache lines, which is much more expensive. The effect is that the operation becomes at least 10-20 times slower than before. It seems that our .NET application in x86 allocated a 64 bit variable on a 4 byte (int32) boundary at an address that crossed two cache lines (see picture above). If by bad luck we use variable 7 for a 64 bit interlocked operation we will cause an expensive global cache lock.

Since under 64 bit the class layout is usually 8 byte aligned, we practically never get variables which span two cache lines, which makes all cache line related errors go away; our application was working as expected under 64 bit. The issue is still there, but the class layout makes it much harder to get into this situation. Under 32 bit, however, we frequently find data structures with 4 byte alignment, which can cause sudden slowdowns if the memory location we are hitting sits on a cache line boundary. Now it is easy to write a repro for the issue:

That is all. You only need to allocate enough data on the managed heap so that the other data structures will at some point hit a cache line boundary. To force this you can try different byte counts with a simple for loop on the command line:

You can play with the little sample for yourself to find the worst performing version on your machine. If you now look at WPA with a differential view you will find that CompareExchange64 is responsible for the measured difference:

Since that was such a nice problem, here is the actual C code I used to verify that the issue only pops up at cache line boundaries:

… The integrity of a bus lock is not affected by the alignment of the memory field. The LOCK semantics are followed
for as many bus cycles as necessary to update the entire operand. However, it is recommend that locked accesses
be aligned on their natural boundaries for better system performance:
• Any boundary for an 8-bit access (locked or otherwise).
• 16-bit boundary for locked word accesses.
• 32-bit boundary for locked doubleword accesses.
• 64-bit boundary for locked quadword accesses. …

The word better should be written in big red letters. Unfortunately it seems that 32 bit code has a much higher probability to cause random performance issues in real world applications than 64 bit code, due to the memory layout of some data structures. This is not an issue which makes only your own application slower. If you execute the C version concurrently

start cmpxchg.exe && cmpxchg.exe

then you will get not 1s but 1,5s of runtime because of the processor bus locking. In reality it is not as bad as this test suggests, because if the other application uses correctly aligned variables it will operate at normal speed. But if two applications exhibit the same error they will slow each other down.

If you use an allocator which does not care about natural variable alignment rules, such as the GC allocator, you can run into issues which can be pretty hard to find. 64 bit code can also be plagued by such issues because there are also 128 bit interlocked intrinsics. With the AVX2 SIMD extensions memory alignment is becoming mainstream again. If people tell you that memory alignment and CPU caches play no role in today's high level programming languages, you can prove them wrong with a simple 8 line C# application. To come to an end and to answer the question of the headline: no, it is not a CPU bug, but an important detail of how CPU performance is affected if you use interlocked intrinsics on variables which span more than one cache line. Performance is an implementation detail. To find out how bad it gets you need to measure for yourself in your scenario.

issues. A special case is network which falls into the wait issue category where we wait for some external resource.

When you download my simplified profile and apply it to the provided sample ETL file you can analyze any of the above issues in much less time. Here is a screenshot of the default tab you will see when you open the ETL file.

Stack Tags are Important

The first and most important graph is CPU Usage Sampled with "Utilization by Process And StackTags", which is a customized view. It is usable for C++ as well as for .NET applications. If you ever wondered what stack tags are good for, you can see it for yourself here. I have added a stack tag named Regular Expression which is set for all calls to the .NET Regex class like this:
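A tag definition of that shape could look like the following .stacktags fragment. The module and method patterns are assumptions; the exact module name depends on how the framework assemblies are loaded on your machine.

```xml
<?xml version="1.0" encoding="utf-8"?>
<Tag Name="">
  <!-- Tags all stacks that go through the .NET Regex class -->
  <Tag Name="Regular Expression">
    <Entrypoint Module="System.ni.dll" Method="System.Text.RegularExpressions.Regex*"/>
  </Tag>
</Tag>
```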

If more than one tag can match, the tag of the deepest matching method in the stack wins. This is the reason why the default stack tag file is pretty much useless: if you add tags for your application they will never match, because the low level tags match long before the tags for your own code ever could. You have to comment out all the predefined stuff in it. If you use my stack tag file you need to remove the WPA provided default.stacktags under Trace - Trace Properties - Stack Tags Definitions. In practice I overwrite the default file to get rid of it. If you leave it in, you will e.g. get very cheap CPU times for a GC heavy application, because all call stacks of your application which trigger GC stuff are added to the GC stack tag. This makes your application specific stack tags look much slimmer than they really are, because your application tags come later in the call stack and are then not used to tag your stacks.

Why would I need a custom stack tag file? It makes it much easier to tag expensive high level operations of your application so you can see how much time you were spending with e.g. loading game data, decompressing things, rendering, … This makes it easy to detect patterns in what your application was doing at a high level. Besides, if you find a CPU bottleneck in your application, you can add it under e.g. a "Problems" node so you can document already known issues, which are then easy to spot.

For our problematic application PerformanceIssuesGenerator.exe we see that it was doing Regex stuff for 17,5s of CPU time (Weight in View is in ms time units). To see how long the actual running time was we need to add the Thread ID column, since currently we sum the CPU time of all threads, which is not the actual clock time we spent waiting for completion.

The context menu is actually customized. It is much shorter and contains the most relevant columns I find useful. If you want more of the old columns then you can simply drag and drop columns from the View Editor menu which is part of all WPA tables. If you want to remove additional columns you can also drag and drop columns back to the left again. This way you can streamline all of your column selection context menus which is especially useful for the CPU Usage Precise context menu which is huge.

Select A Time Range to Analyze

Now we see that we have two large Regex CPU consumers with a large time gap in between. But what was the application actually doing? This is where marker events from your own application come in handy, so you know which high level operation the user triggered and how long it took. This can be achieved with a custom ETW event provider or with the special ETW marker events which WPA displays in the Marks graph if any of them are present in your ETL file. To be able to use them to navigate in the ETL file, your application must write them at interesting high level time points which indicate for example the start and stop of a user initiated action. For .NET applications the EventSource class is perfectly suited for this task. Marker events can be written with a custom PInvoke call to

Here is the code to write a marker event in C# which shows up in all kernel sessions. The TraceSession.GetKernelSessionHandles method is adapted from the TraceEvent library. If you have only "NT Kernel Logger" sessions (e.g. if you use xperf) then you can use 0 as the session handle to write to it.

Now that we have our marks we can use them to navigate to key time points in our trace session:

We see that the first CPU spike comes from RegexProcessing, which took 3,1s. The second regex block was active between the Hang_Start/Stop events, which took 2,7s. This looks like we have some real problems in our PerformanceIssueGenerator code. Since, according to our ETW marks, we have regex processing, many small objects, many large objects and a hang with simultaneous regex processing, we need to select one problem after the other so we can look at each issue in isolation. That is the power which custom ETW providers or ETW marks can give you. Normally you are lost if you know that you have several issues to follow up on. But with application specific context marker events you can navigate to the first regex processing issue. To do that, select the first event and then hold down the Ctrl key while clicking on the stop event to multi-select events. Then you can right click on the graph to zoom into the region defined by the first and last event.

Analyze CPU Issue

When you now look at the CPU consumption of the busiest threads we find 2,7s of CPU time. At the bottom WPA displays the selected duration, which is 3,158s and matches the reported timing of 3,178s quite well. But the reported thread time of 2,7s is not quite the observed duration. In the graph you see some drops in the CPU graph which indicate that for some short time the thread was not active, possibly waiting for something else.

Wait Chain Analysis

That calls for a wait chain analysis. If you scroll down you will find a second CPU graph with the name CPU Usage (Precise) Waits. This customized graph is perfectly suited to find not only how much CPU was consumed but also how long any thread was waiting for something. Please note that this graph does not replace the CPU Usage Sampled graph; I have explained the difference between both CPU graphs earlier. The column selection context menu of this graph has been massively thinned out to keep only the most relevant columns. Otherwise you would have to choose from over 50 items, and the context menu even has scroll bars! Now we have only process, thread id and thread stack as groupings. Next comes a list of fixed columns which are always visible because they are so important. There we see how long each thread waited for something (WaitForSingleObject …) as total and maximum wait time, and the CPU usage in milliseconds. If we sum up, for the most expensive thread, the wait time of 0,385s and the 2,726s of CPU time, we get 3,111s, which is, within a small error margin, exactly the time we got by measuring the start and stop of our regex processing operation.

Is this conclusion correct? Not so fast. Since a thread can only be running or waiting (it can also sit in the ready queue, which is only relevant if you have more threads running than cores), the sum of CPU and wait times for each thread will always add up to 3,1s, because this is the time range you zoomed into. To actually prove that this time was really spent waiting while we were regex processing, we have to sort by the wait column and then drill down into thread 10764.

When we do this we see that all the waiting occurred while the DoRegexProcessing delegate was called. It was waiting for the regular expression JIT compilation to complete. Now we have proven that the wait time is really spent executing the regex processing stuff. If we want to optimize the total running time we have two options: either we use more threads to parallelize even further, or we tune our regular expression or replace it with something else. Before going down that route you should always check whether this string processing is necessary at all. Perhaps strings are not the best data structure and you should think about your data structures. If you need to use strings you should still verify that the regex processing was really necessary at this point in time. Perhaps you do not even need the results of this regex processing right now.

A Garbage Collection Issue?

In the list of marker events we see that the first regex issue overlaps with a GenerateManySmallObjects operation. Let's zoom into that one and check out what we see under CPU usage. There we see that we are consuming a significant amount of CPU in the Other stack tag which categorizes unnamed stacks into its own no