If you're really down to asking such ridiculous first-tier tech support questions, all hope is lost. Start by assuming the user is capable of connecting their computer to the internet, and forget the details; they are irrelevant to your application.

I'm trying to help you get up and running; however, that doesn't seem to be what you are interested in right now. EAGLE still works and there are many people still working with it. The way I look at it, you have a choice to make. If you decide to continue to work with EAGLE, then I can assure you that the rest of the support staff and I will do everything we can to resolve any technical issues that may come up.

If you want help with getting EAGLE to work then I'm happy to help, otherwise there's not much I can do for you.

Hi Monkeh,

You can't assume anything, even in a technically savvy community like this one. Sometimes the smallest details are the ones that cause the biggest issues; in his post, Karel provided nothing other than a picture. With no other information I chose to start with the simplest points. EAGLE runs on three different operating system families and each one is a minefield in its own delightful way.

I dislike Autodesk's transition to the subscription model as much as anyone here (and don't plan to switch to it at all). But Jorge is not to blame for this. He continues to offer honest technical support, in a situation where his job satisfaction must be way down. We can be civilized here, can't we? If you have to let off steam, there's always Matt to complain to, who is actually in charge of the business side...

Hi Jorge,

This was never a problem with V7 & V6. The problems started to arise with V8. So far, most of the programming work done since Autodesk acquired CadSoft EAGLE has been to make the subscription and internet license check work. And the result is a big mess.

Hope that clarifies things.

Let me know if there's anything I can do for you.

Best Regards,
Karel

I completely agree. I've known Jorge on other EAGLE forums for a couple of years and he's always really helpful and exceedingly patient when dealing with a wide range of EAGLE users and skill levels. He doesn't deserve to take flak when he's just trying to help.

I don't envy his position. When I was a teenager I worked for a stint at a local fast food joint. When I worked counter or window and there was a mistake on an order, I was the one who got yelled at by the customer. It wasn't my fault, I didn't make his burger, but I was the one facing the customers so I had to deal with it.

At some point, though, you have to wonder if your employer has your back. If the burgers are coming out tasting like dish soap...

Well, yeah, and I could complain about it, upper management would defend their decision, and if I kept complaining about it I'd be out of a job. Not a big deal for a kid making just over minimum wage, but at some point in life it starts to matter more. Every job I've ever worked had some degree of corporate BS; even switching jobs is likely to replace one variety of BS with another.

Yes, I noticed this with GitHub too. Apparently they use Amazon cloud services for their release download servers, so I couldn't download a release of some software (not related to Eagle). It was offline yesterday for 4 hours:

quote: "Amazon wasn't able to update its own service health dashboard for the first two hours of the outage because the dashboard itself was hosted on AWS." Brilliant idea to host the AWS outage dashboard on AWS

So there were problems on 5 days last year. Maybe it would be better to use Google Cloud?

The only reliable solution is to revert to the old license system.

Even if that were to happen (which Autodesk has said is "non-negotiable"), it would be too late a solution for me or the others who've jumped on the Altium Circuit Studio $495 deal or the Designer 40% offer. The learning curve is not too bad, and, well, Eagle doesn't have enough going for it to lure people back once they've switched away, IMHO.

This is not the first time that Amazon Cloud had this problem. There is even a website which monitors and counts the problems: [...] So there were problems on 5 days last year. Maybe it would be better to use Google Cloud?

Amazon really does give you all the tools to have a resilient, distributed infrastructure with no shared failure domains. They offer something like 15 isolated geographic regions, many with multiple individual datacenters within the region. If you do things properly, creating infrastructure is programmatic, so it becomes more a question of design and automation rather than manual effort to establish services in another region.

Unfortunately most people don't take advantage of the platform and just stick all their infrastructure in one region. Usually the oldest and least reliable one, us-east-1 (N. Virginia), which is where almost all of the outages occur. When people suffer outages on AWS it is almost always because of bad design, rather than a lack of tools to stay operating and available. Individual "cloud" resources are supposed to be unreliable and disposable, by design, but not a lot of people really get that. It is conceptually easier for people to design things where they don't have to deal with the concept of dynamic resources, networking between regions, replication strategies, redundant DNS providers, etc.

This will be the same whether you are on AWS, Azure, or Google Cloud. AWS just has the largest customer base, and thus the most people yelling when they shoot themselves in the foot by not having a proper architecture or DR strategy.
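The design-around-failure idea in the last couple of posts can be sketched in a few lines. This is purely illustrative Python, not a real AWS API; the region list and the health-check function are stand-ins for whatever monitoring you actually have:

```python
# Sketch: treat regions as disposable and fail over between them.
# The region names are examples; is_healthy is a stand-in for a real
# health check (HTTP probe, status feed, etc.), not an AWS call.

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # preferred order

def pick_region(is_healthy):
    """Return the first healthy region, or None if everything is down."""
    for region in REGIONS:
        if is_healthy(region):
            return region
    return None

# Simulate an outage like the 2017 S3 event: us-east-1 down, rest up.
down = {"us-east-1"}
print(pick_region(lambda r: r not in down))  # -> us-west-2
```

The point is that the failover decision is trivial once the application is built to tolerate it; the hard part is making your data and state available in more than one region in the first place.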

Definitely. Once Eagle loses customers, that's it; they won't be back. They will have to rely on new subs. The current Circuit Studio license + maintenance is cheaper than a year or so of an Eagle subscription, so I can't see how Eagle can compete with that.

Isn't the management of dynamic resources and all the other things essentially what makes the cloud a cloud, where all the detail is hidden away so as to make detailed attention from the customer unnecessary?

It isn't entirely clear that you are not placing the responsibility for managing reliability onto the customer's shoulders, something which customers are seeking to avoid by buying cloud services.

Well, it would be nice, but as far as the cloud platform (infrastructure) is concerned, that is observably untrue. Not having to care about the reliability of the underlying resources is a good end-user experience for someone consuming an application run on cloud services, but the people doing the running absolutely have to account for failure, which coincidentally is largely the same problem as providing horizontal scalability for serving increasing (or globally distributed) load.

For example, look at the NIST definition of cloud computing. Cloud computing is defined by its on-demand, utility model for provisioning computing resources. Reliability is not even mentioned.

Since cloud computing centers around on-demand, scalable resources, these resources are generally less reliable than in a traditional enterprise datacenter model. In the traditional enterprise model, great care and expense are taken to try to make individual servers (or VMs) as reliable as possible. You have highly overbuilt, fault-tolerant hardware, and hypervisors like VMware that take care of making individual virtual machines fault tolerant at a software level. But this involves high dollar amounts, isn't generally scalable on demand (to meet varying or unexpected load), and still has ultimate limits to its reliability. For example, you can spend tens or hundreds of thousands of dollars on enterprise-grade hardware, but it doesn't stop an earthquake (or someone tripping and hitting the emergency power-off) from taking down the physical datacenter. It's also not as practical to just build another floor onto your datacenter and forklift in a bunch of servers if you're expecting more traffic next week.

In the cloud model, the resources are designed to be disposable: if a VM fails, you simply replace it with another one. Relying on the functioning or state of an individual VM (or data center, or regional service) is not in keeping with the model of how those resources were designed to be consumed. Services like AWS provide building blocks that people can use to construct reliable services, but they don't provide anything like the concept of individual services with 100% reliability. For a variety of reasons, that just isn't a practical model. The VMs are run on cheap, plentiful, bottom dollar hardware, and reliability becomes the responsibility of the application rather than the infrastructure.
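As a toy illustration of that disposable-resource model (nothing here is a real cloud API; `Instance` is just a stand-in for a VM): failed instances aren't repaired, they're dropped and replaced to hold the fleet at its desired size.

```python
# Sketch of "disposable" resources: a tiny supervisor that replaces
# failed instances rather than trying to fix them. Illustrative only.
import itertools

class Instance:
    _ids = itertools.count(1)  # give each "VM" a fresh id
    def __init__(self):
        self.id = next(self._ids)
        self.healthy = True

def reconcile(fleet, desired):
    """Drop failed instances and launch replacements up to the desired size."""
    fleet = [i for i in fleet if i.healthy]
    while len(fleet) < desired:
        fleet.append(Instance())
    return fleet

fleet = [Instance() for _ in range(3)]
fleet[0].healthy = False             # a VM fails...
fleet = reconcile(fleet, desired=3)  # ...and is simply replaced
```

This is essentially what an auto-scaling group does for you: the individual VM is allowed to be unreliable because the application never depends on any particular one existing.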

This is something that Amazon is pretty up front about (e.g. Building Fault-Tolerant Applications on AWS, Architecting for the Cloud), but again, there are a lot of misconceptions about what it means to move to a cloud model. People who aren't ready to architect around failure in their application would be better suited with a traditional managed service provider that caters to older enterprise-type applications, and will work with them to manage DR. They move to a cloud provider without being able to handle it, because it looks cheap, and then they complain when their application becomes unreliable.

Not sure about Google, but Azure have had far more than their fair share of outages; ISTR DNS config changes being the root cause of one or two of them. DNS itself is a nightmarish risk when combined with fat fingers. Edit: since I started writing this post yesterday, it looks like Office 365 has been out again, although it's not clear if this is just retail or enterprise too.

Quote

the most people yelling when they shoot themselves in the foot by not having a proper architecture or DR strategy.

I agree, but using the same provider for your production and DR does not remove common mode failure when they are using distributed configs, and as a punter you won't have any visibility of those changes in the cloud anyway until it breaks. To do DR "properly" in the cloud necessarily makes it expensive if you're to avoid such failures, and in many cases it won't save you a dime, and can be more expensive if you end up using multiple cloud vendors to spread and reduce risk.

With cloud, the devil is in the detail. Regrettably, many IT managers and management consultants who don't do detail have difficulty understanding and analysing the technical risks, but it's OK: typically they'll have floated off to their next engagement once it's too late and they've left their slug trail of destruction behind them.

On the other side of the coin, there are definitely certain situations where putting non-critical and non-core services in the cloud can make a lot of sense financially. If your business can take a half day or a day's hit every now and then, then that's fine. But it's very brave to risk your core business offerings there without a full understanding of the risks involved, including service-level RTO & RPO, offshoring of data, and even placing data locally in the hands of an entity with foreign interests, allowing foreign jurisdictions to exercise access to that data without you being aware.

Ah, yes, that outage was quite notorious, but it was contained to S3 in us-east-1 (N. Virginia). I've never seen a systemic failure in AWS that crossed a region boundary. Having discussed it with their engineers previously, the "control plane" (software, configuration, management) is segmented by region (a region being a geographical center with a set of one or more "availability zone" datacenters), with few if any dependencies between regions, for exactly that purpose -- to avoid systemic failures across the whole platform. So anyone who had a proper DR strategy in place with replication of S3 objects between regions and a solid (DNS, CDN, etc.) failover method in place was not affected.

This includes the rather large set of AWS infrastructure I am responsible for, so the "AWS outage" was a complete non-event for me. That was also good because I was on vacation!
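For what it's worth, the cross-region S3 replication mentioned above is mostly a matter of configuration. Here's a sketch of roughly what that configuration looks like; the bucket, account, and role names are made up, and the exact field set varies with the API version, so treat this as the shape of the thing rather than a recipe:

```python
# Sketch of an S3 cross-region replication configuration. All names
# (account ID, role, buckets) are hypothetical placeholders.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical IAM role
    "Rules": [
        {
            "ID": "dr-copy",
            "Status": "Enabled",
            "Prefix": "",  # replicate every object
            "Destination": {
                # destination bucket lives in another region, e.g. us-west-2
                "Bucket": "arn:aws:s3:::example-dr-bucket",
            },
        }
    ],
}

# With boto3 this would be applied roughly as:
#   s3.put_bucket_replication(Bucket="example-primary-bucket",
#                             ReplicationConfiguration=replication_config)
```

Combined with a DNS or CDN failover in front, that's the "proper DR strategy" that made the outage a non-event for some of us.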

I agree, Azure is not quite as mature as AWS from an availability or a services standpoint. There has been a lot of churn in their platform implementation in the last few years. Of course trying to make the traditional Microsoft services (AD, SQL server, etc.) both elastically scalable and highly available is also very challenging in ways that AWS doesn't have to deal with. Microsoft has a lot of baggage there.

O365 especially is notoriously unreliable, and unless you are lucky enough to have a direct line into Microsoft, support is horrible.

Right, I mean, if you want to be ideally protected you have vendor diversity, control plane diversity, geographical diversity of your administrative team, etc. It can get impractical. But even solely within AWS, taking advantage of the region partitioning (above) and carefully considering your other points of failure (like DNS) gets you most of the way there in terms of practical uptime; meaning five-nines (99.999%) availability of the infrastructure in aggregate is quite achievable. Availability at that point ceases to become an infrastructure issue and tends to become more of an application reliability issue.
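The five-nines arithmetic is easy to sanity-check. Assuming region failures really are independent (which is the whole point of the partitioning), the aggregate unavailability of n redundant copies is the product of the individual unavailabilities:

```python
# Back-of-the-envelope availability math: two independent regions, each
# only 99.9% available on their own, already exceed 99.999% in aggregate.
# The independence assumption is doing all the work here.
def aggregate_availability(a, n):
    """Availability of n independent redundant copies, each available with probability a."""
    return 1 - (1 - a) ** n

print(aggregate_availability(0.999, 2))  # ~0.999999, i.e. "six nines"
```

Of course a shared dependency (a single DNS provider, one deployment pipeline) breaks the independence assumption, which is why those other points of failure matter so much.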

But you're still worlds apart from a traditional enterprise datacenter solution. As expensive as DR is in AWS, traditional DR is even more so, since you're on the hook for the costs of the datacenter facilities and hardware up front, whether you are using them or not.

Some workloads are definitely better left local. But I'd still say that the expertise necessary to competently run a physical datacenter, with all the facilities maintenance, networking, and systems design concerns still presents a large and tangible risk as well. Entire datacenters become disabled all the time due to generator failures, bad UPS maintenance, cooling issues, cheap and poor networking design, limited upstream capacity and DDoSes, etc. So many people and so much expertise is required just to keep the lights on, and most companies aren't willing to do it properly.

Five nines type availability is a difficult engineering exercise no matter which way you do it, but for the competent and informed I still think services like AWS make it more accessible.

We apologize for interrupting your workflow, as you have noted the issue has been solved. I am trying to find out what caused it, the developers did their best to get it up and running as fast as possible blah blah blah...

Wow... this is incredible. Even if you are just an isolated case it still gives me the chills to see these kinds of issues. When I am working on a board I *really* don't want to be interrupted, whether or not there is an external deadline bearing down on me.