The IPKat

Passionate about IP! Since June 2003 the IPKat has covered copyright, patent, trade mark, designs, info-tech, privacy and confidentiality issues from a mainly UK and European perspective. Read, post comments and participate!

The CIPA President, Julia Florence, announced at yesterday's East of England meeting that CIPA will be conducting a review of the training, support and assessment of students. This was confirmed in a written statement from CIPA council released later in the day (see here).

CIPA's statement is in response to the unprecedented level of criticism the PEB have received over this year's FD4/P6 paper (see the comments on IPKat's post here). The statement also indicated that the Council:

"recognises that communicating the low pass rate and the revision of the pass mark during EQE week was not appropriate given the existing pressures on candidates sitting the European exams. Council has asked the PEB to ensure that its communications are better timed in the future."

The CIPA Education Committee will be conducting the review of student assessment, working with IPReg and the PEB. No further details are yet provided as to the form of the review.

Many have commented that the P6/FD4 (and the UK Finals overall) are not fit for purpose and should be replaced, for example, with a University course. However, it seems to this Kat that one of the main issues with P6/FD4 may have a very simple solution. This year's paper highlighted that a key challenge in passing P6/FD4 is coping with the unpredictability of mark distribution. The seemingly random weighting of the mark scheme to different areas each year (e.g. construction, novelty or inventive step) introduces a considerable element of luck as to whether a candidate chooses the correct area to focus on. This problem is, of course, exacerbated by the highly time pressured nature of the examination.

FD4 2018 related to gantries and overhead cables

In this year's paper, for example, the mark scheme reveals that there were 25 marks available for inventive step. Inventive step has in previous years been allocated far fewer marks (the 2018 paper and mark scheme can be found here). Candidates who spent more time considering novelty or infringement would have had less time for inventive step. Less time spent inevitably results in fewer marks.

An easy solution to this problem would be for the paper to indicate how many marks are to be awarded for each section. FD4/P6 purists may argue that this would introduce an artificial feeling to the examination. However, this Kat posits that the time pressure aspect is itself artificial. Indicating the mark distribution in the paper, far from introducing an artificial aspect, would on the contrary go some way to addressing the artificial nature of the paper introduced by the time constraints.

The decision of CIPA to review the paper is welcome, and this Kat will follow developments with interest.

IPKat is aware that P6/FD4 elicits strong emotions. However, meaningful reform will not be achieved by empty vitriol. This Kat therefore hopes that CIPA's promised review and engagement with candidates will facilitate a meaningful and civil debate.

137 comments:

P6 is hard but doable. There would not be a UK profession if it were completely impossible. A competent attorney should pass it after a couple of attempts. P6 is not representative of a real-life I and V opinion, but it tests many of the fundamental competencies. Outsourcing to a university would most likely make the exam more expensive to sit and also reduce the difficulty. I hope the PEB does not make a knee-jerk decision based on a few anonymous comments on an IPKat article.

Regarding the importance and righteousness of these exams, I would like to draw your attention to the fact that the legislators decided the patent profession was so anti-competitive that they needed to do something about it.

In order to do this they removed the requirement for people to be registered patent attorneys in order to represent people before the UKIPO.

Some might say that the profession's continued demand that people pass these exams is those in power trying to hold on to that anti-competitive past.

The EQEs are easier than the PEB exams, as is amply demonstrated by the different pass-rates of the two. Ability to pass one should not necessarily guarantee ability to pass the other - if it did there would be no point having separate UK exams.

I agree with the OP that comments on an IPKat article demonstrate little. I don't think the PEB exams are bad. Compare them to the EQEs, where the granting of marks for some mistake or other by the examiners is an annual occurrence.

I also don't think that universities are suitable custodians of entry to the profession. They have not proved so in other areas of the law. The rampant grade inflation coupled with equally rapid increases in fees does not show them to be so.

As for anti-competitive forces, the most anti-competitive is the complexity of patenting itself, which in practice requires representation by patent attorneys. I think only something like 5% of self-filed patent applications actually get granted. And no-one is genuinely considering changing the law.

An important caveat: the exact weighting of marks for the year isn't actually known when the exam happens. The mark scheme is only finalised at the examiners' standardisation meeting, following the exam, conducted on the basis of a first pass at marking some representative papers.

It would be rare that the proportions of marks available in a section changed drastically between the initial drafting of the mark scheme and its final standardisation. However, suppose a situation arose in which, for the sake of being fair to the candidates, exactly this sort of shift of marks needed to happen: it would be much more difficult to justify doing it if the exam paper had already gone out with provisional mark weightings on it.

In my view, approaching the exam with an intention to replicate things which worked in previous years, in the hope that they will work this year, is a mistake: it's an application of "cargo cult" logic to the situation. The advice given by the candidate, and the weighting of effort in the exam, should be based strictly on the situation in front of them.

Past papers can offer some guidance, but this is not an exam which can be passed by mimicking the approaches that worked with those past papers by rote; for instance, where some types of advice and strategy available to the client might have made perfect sense in achieving their ends in some years, in other years those avenues, whilst legally available, aren't actually helpful in getting the client any closer to the outcome they require.

In other words: think like an attorney, come to your own conclusion, present your arguments and conclusion. If you are thinking like an attorney seeking to give the best advice to their client, I genuinely believe you will do better than if you are treating the exam as an abstract exercise in mark optimisation.

Attempting to "game the system" by second-guessing the thinking behind the mark scheme wastes time you could be spending on thinking about the problem in front of you, and I would personally be against anything which nudged candidates into such a mindset - it would do them a disservice.

Couldn't you resolve the weighting issue by marking the exam scripts of qualified attorneys who have sat the paper in advance? It's my understanding that they are sat in advance at the moment, but not marked, although I might be entirely wrong on this point.

Also, just to highlight your comment, if you are thinking like an attorney trying to give the best advice to your client you wouldn't do it in exam conditions in five hours. If this were course work or an assessment with a less limited time frame I would completely agree with your points but that is not the case for this exam.

If your clients always give you instructions with plenty of time to complete them to the best of your ability, then I congratulate you and I am very jealous of you. Naturally, I mean "the best advice given the constraints you are under", rather than "the best advice a human being could possibly give the client".

The weighting issue could be addressed by qualified attorneys taking the sample papers... but then we'd be doing the weighting based on a group of people who have all passed P6 in the past, rather than the entire set of people sitting the exam (which in any particular year will, of course, include people who absolutely are not ready to pass and be qualified attorneys). It's a different test.

Even then, I think it is possible to put too much weight on the weighting. One of the regular issues raised in the examiner's report on P6/FD4 is that the general standard of inventive step answers is poor.

Whether the total marks available for inventive step is 15, 19 (as it was last year), or 25, does it actually matter if the sort of answers people are offering up are often not even worth double figures?

I don't believe that thinking like an attorney will help with this exam. In my experience the requirement is to think like THE attorney who set the exam. If you disagree with their construction or their understanding of the material you will fail unless you are very lucky and get one of the seemingly few examiners who will actually look for marks in your alternative understanding.

If patent construction were as clear-cut as the mark schemes and the Examiners seem to think, why would we need judges to decide these things in the real world?

It's clear that the problem with this paper is the amount of "thinking time" versus the amount of time to write the answers. If the Examiners are really looking for a competent patent attorney, they should allow sufficient time to complete each section. After all, in real life, if you had that little time to do an I and V analysis, you would ask your colleagues for HELP!

If what you said concerning construction were true, you would expect there to be wild variation in the marks people got in the construction section. Those who hit the same construction as the attorney setting the exam would get a high mark, those who missed would get a low mark.

If we actually look at the examiner's report for this year, we find that this variation didn't happen. "In general, candidates scored reasonably well in this section", it says.

If we are going to make arguments based on what the mark schemes and examiners say about patent construction, let's look at what they actually say about construction.

Relying on what the Examiners' comments say seems like folly to me - particularly when it's not backed up with any evidence.

The Examiners' comments often say insightful things like "It is not felt that timing was an issue as most candidates managed to answer all questions", and yet if you ask ANYONE who takes the exam they will tell you there is not enough time to properly consider the paper.

Surely this shows you how out of touch or wilfully ignorant the Examiners are, and consequently how much faith you should put in the rest of their comments?

Relying on the examiner's comments when it comes to statements like "In general, candidates scored reasonably well in this section" would seem to be safe, no?

Unless you believe that the examiners are actually misrepresenting how well people scored in the construction section? That would seem to be a rather more serious matter than whether weighting of marks should be indicated on the exam paper.

No, Arthur is quite correct. Ethics requires that if your advice has limitations (e.g., you didn't have time to search the prior art properly, you only had time to skim-read a particular document) you advise the client of those limitations and state that more work is required to give a full view, identifying what that work is. But you still state what your view is at that point, with appropriate caveats.

There appear to be complaints that the examination is not fit for purpose, but it should be remembered that its purpose is to test whether candidates are fit to practise.

The suggestion that marks for different parts should be indicated in advance, and that there is "a considerable element of luck as to whether a candidate chooses the correct area to focus on", seems to ignore that one element of fitness to practise is recognising what is important and what is less important, and applying the appropriate level of attention to the circumstances. "Luck" has little to do with a professional attitude, which requires preparation adequate to deal with the issues presented. If one has prepared for infringement arguments and inventive step arguments, then the appropriate guns can be brought to bear on whatever distribution of attention is required.

What I find disturbing in the present case is that whereas the examiners have marked the papers, with a pass mark in mind, and have assessed candidates' fitness to practise accordingly, the PEB has seen fit to lower the pass mark, and thereby to pass candidates the examiners thought not fit to practise. Further, this applies to all affected candidates and does not address why a given candidate scored below the normal pass mark.

This damages not only the examination system, but also the candidates who passed this year with below the "normal" pass mark. Whether employers will be put off by any doubt as to fitness to practise raised by a below-"normal" mark, or will be impressed that the candidate found their way through a meat-grinder, will depend on the employer.

Having been an examiner myself, and having worked on the basis of trying to pass anyone who had reasonable knowledge and was not positively dangerous, I find this second guessing of the examiners deeply disturbing.

You comment as if candidates who scored 47-49 in this year's paper are not fit to practise. Their percentile position in any other year would have resulted in a good pass.

It is reasonable to expect that candidates this year are broadly equivalent to those in any other year. Furthermore, pass rates in the other FD papers improved on previous years, suggesting that the low pass rate in FD4 was not simply down to a less skilled cohort.

The Middlesex study covered exactly this point! It found that examiners do not mark consistently.

I would argue that this inter-examiner variation is the main problem with FD4.

A low pass rate is vexing, but an exam where most examiners would comfortably pass a script yet a minority would fail the same script is simply not fit for purpose. Indeed, the examiners openly admit to having answer preferences that contrast with those of other examiners.

You must have an exam where candidates know what a good pass is. Presently you do not.

The distribution of marks is decided according to the examiners' preferred construction. If you deviate from the preferred construction, you may be penalised a few marks in the construction section of the paper which might not be fatal. What might be fatal is the fact that the distribution of marks is fixed, because under your (non-preferred) construction, inventive step may be less of an issue than e.g. infringement.

If you construe a term broadly, you can capture more potentially infringing articles. You might then spend a long time discussing all the possible infringements when there are actually relatively few marks available, even though all the points you make are valid and consistent with your construction. You might even get a great mark in the infringement section, but if the mark distribution is fixed based on the preferred construction then you can effectively waste a lot of time writing completely valid analysis.

What is really unfair about this exam is that if the preferred construction means there are clearly fewer possible infringements, candidates who construed more broadly (even if it’s an arguable term that could go either way) are penalised multiple times - once in the infringement section of the paper, which is fair enough - but they are also affected in the other sections due to the time wasted applying their construction to all possible infringements.

When there are arguable terms in the claims that can affect the number of infringements (which was arguably the case this year), the examiners should be able to move marks from one section of the paper to another.

The examiners always say that the best candidates chose the right words to construe and dealt with the issues well and scored highly as a result, but it's completely circular logic. People who get great marks might actually be fantastic patent attorneys and knowingly pick the right terms to construe and choose the correct interpretation based on their brilliance - or it might just be a case of infinite monkeys with typewriters.

You would presumably agree, then, that publishing the mark weightings in the paper would be counter-productive? After all, if you consider it important to give examiners the liberty to shift marks from one section of the paper to another based on people's constructions, then implying that the weightings are already set in the paper would be even more misleading under those circumstances than it would be at present.

That said, being able to award more marks to a section than it is allocated would only become useful if candidates were already scoring sufficient marks in those sections to hit 100% of the available marks, and were still making additional useful and relevant points to the situation at hand (as opposed to repeating chunks of law or gobbets of advice for the sake of it, regardless of relevance to the client's needs). In my experience this does not happen.

The pass rate was 33% this year, which is low. Significantly more than 33% of the candidates who sat the paper will have been good enough to pass. Saying that good scripts can fail because of the fixed distribution of marks is not equivalent to saying that everyone who passed was lucky; the infinite-monkeys comment is basic mathematics.

This approach would make sense if the weighting for each section roughly equated to the effort required for it. That is not the case, however. The construction section, which is vital for the remainder of the answer, typically takes candidates 2 to 2.5 hours to complete, yet should take only one hour based on the marks allocated. Over the past few years there has been a reduction in the so-called "easy" marks given for getting basics such as dependencies correct. Further, points of contention in construction that would previously have been worth 1 to 2 marks are now worth a mere half mark. This makes every mark a fight, and is perhaps why we are seeing a significant downward shift in the distribution of marks.

How long will CIPA's promised review be? Would there be sufficient time to incorporate any comments and feedback into next year's paper?

I don't want it to take as long as the University study. By the way, I'm still unclear as to what has been implemented as a result of the recommendations made in that report, apart from the minimum pass descriptor.

It seems that there are some sitting this exam who feel entitled to be handed a pass on a plate. It has always been a difficult exam. Consistency across different years is clearly an issue, but at the end of the day, one or two resits are usually enough for most to pass.

Time is a massive issue with this exam, and the PEB would need to address this. There is no need to push candidates to the wire with this exam. It's not useful to do that and it's not real life. Like someone said before, in real life you get help from others if you only have 5 hours, but this has never happened in practice (and I'd be happy to be corrected).

Get rid of sufficiency in the exam too. It's worth 1 or 2 marks and there is no point testing it.

Construction, infringement, novelty, inventive step, amendment and client advice should be the main aims of this paper.

"It seems that there are some sitting this exam who feel entitled to be handed a pass on a plate. It has always been a difficult exam".

Yes, it has always been difficult, and as a trawl through past examiners' comments will reveal, those sitting the exam have always reacted in the same way.

I particularly like P6 1997, where the examiner gripes that "many candidates, particularly those who have been unsuccessful, are of the opinion that the paper sets an unrealistically high standard with the objective of keeping the pass rate low".

You only have to look at the comments on here, which are presumably from senior members of the profession or examiners, to realise that nothing is going to change. They don't think there is a problem. Almost every single trainee who has attempted the exam in recent years, whether they passed or failed, says there are serious issues with it. Who is right?

The same comments are being made by these senior members: trainees want a pass handed to them on a plate, trainees are not preparing enough, trainees are just not smart enough (implied rather than overtly stated), the subject matter doesn't matter (usually accompanied by some specious argument), in our day the pass rate was 10% and we never whined, the exam is very difficult but fair, and my personal favourite: everyone competent passes within two attempts.

I am not going to bother listing out counter arguments because others have done so many times over on this and the other blogpost.

My personal take is this - focus on the EQEs. Forget about becoming a chartered patent attorney. As someone pointed out, you don't need to be a CPA to act before the UKIPO.

Firms that don't care about CPA if in possession of EPA qualification, please identify yourselves and prepare for a deluge of applications.

"Almost every single trainee who has attempted the exam in recent years, regardless of whether they have passed or failed, say there are serious issues with it."

And this is evidenced by....

I think there are problems with it (having to construe every integer, rather than only those of actual relevance to infringement or validity, being an obvious one). I still think it's a better test than Paper C, which is far too amenable to learned strategies of no relevance to practice (witness the profusion of pre-prepared tables and highlighting at last week's exam, learned in expensive training courses such as that run by Deltapatents).

As for focusing on only the EQEs, circumstances often change and over-specialisation in any profession is a bad move since it takes away your ability to respond to that change.

As flawed as the PEB exams can be, they are still the most true-to-life test of a patent attorney's trade that I know of.

In the United States the USPTO exam is purely multiple choice and based on the MPEP. No-one seriously thinks this tests their ability as a patent attorney.

The national exams in Italy, Germany, and other countries of mainland Europe are seen as relatively easy compared to the EQEs. I think the lower pass rates of EQE candidates from those countries compared to UK candidates may be explained in part by this.

The EQEs themselves have a number of major deficiencies, the most obvious of which is that Paper A still does not actually require you to draft a full specification. I think a lot of clients would be surprised that, in theory at least, you can qualify as an attorney without ever having drafted a patent application in full.

I think a large part of the outrage against P6 comes from the fact that, for many candidates failing it, it will have been the first exam in their entire lives in which they did not get a pass mark, or at least the first one of real consequence. The adjustment in this case is no worse than the adjustments applied with depressing regularity in the EQEs for unclear wording of the questions.

I think the proposal to tell candidates what the mark scheme is risks taking away an important part of the exam: an attorney is supposed to be able to judge what is and isn't an important part of the case to focus on in their arguments.

You're forgetting that lots of people fail their driving test first time, which has a real consequence but isn't so much of a problem as you don't have to wait a whole year to resit it.

One thing FD4 could take from the EQEs is the simplicity of the inventions. I have never heard anyone complain that they had difficulty understanding what the invention was in the EQE. The same cannot be said for FD4, and it really doesn't help with the time pressure.

I failed this year and it's frustrating, but based on the mark I got, next year I will pass if I actually understand the subject matter. I think I would have this year with a bit more time, but rushing through the paper in the time available meant I missed a couple of points, leaving me a few marks off the adjusted pass mark. As a chemist I have found the inventions in the EQEs much easier to understand than this year's FD4 paper, and I don't think that detracts from the questions asked, since the exam is meant to be about legal principles. The FD4 envelope paper a few years ago would be a good example of making it accessible. I just wish it had been possible for me to sit that paper instead of the 2018 paper.

"I have never heard anyone complain that they had difficulty understanding what the invention was in the EQE."

I refer you to the never-ending complaint, repeated every single year, that the invention was mechanical in nature and thus hard for people in [insert non-mechanical area of art here] to comprehend and thus they didn't get it. Seriously, go and check out the survey feedback on the EPO website -

2018: "With a scientific background in biology and chemistry, why paper A is a specific Mechanic paper? Why such a discrimination?"

"the subject matter of the claims was very mechanical/electronics based with a lot of "means for" language, which is very inaccessible for those with a background in chemistry."

"Paper C 2018 regarded a method and apparatus of cleaning udders, and milking a cow, this is very obviously a more complex scenario and requires a far greater degree of understanding"

"The paper should target testing the legal knowledge of the candidates. If the subject matter chosen is not familiar for the candidates, as it was not this year, candidates need to invest too much time in understanding the technology first"

"The Paper C exam is a test that does not assess the skills of the people in the field of filing oppositions to a European patent. I can not understand the strategy of this exam (difficult topics to digest (milking robots))...."

2017: "I sat paper C only this year. Main weakness during exam was linked to the "mechanical" side of the paper, including some claim interpretation with which I'm not familiar (more involved in chemicals in day to day work)"

"Paper C was difficult for a chemist this year"

"Paper C was very mechanical. Many of the terms in the paper were difficult to understand for a person whose area of work is not mechanical."

2016: "I felt time pressure during the exam (paper C). This exam is very complex ... The language of the paper is not easy and sometimes is very technical (like in 2015)."

2015: "Analysis of multiple document in paper C if features are mostly mechanical/related to engineering" (in answer to the question about what gave the greatest difficulty)

2014: "Paper C this year presented a very complicated factual situation compared to other papers in recent years"

And so on. In fact it's probably the biggest complaint about Paper C after the time constraint. You also see people complaining that a particular learned strategy didn't work or that there wasn't enough time to execute it (especially the CEIPI strategy), which is hardly the EPO's fault.

Additionally, this exact topic (are chemists handicapped in exams using simple-mechanical inventions?) has been studied and no marked variance in pass-rate has been found, either in the EQEs or in the PEB exams. This is why Papers A and B were switched to being a single exam rather than having separate chemical and mechanical papers.

I think anyone coming out of the woodwork to praise FD4 should sit a modern paper in the allotted time, and then tell all trainees with a straight face how reasonable the exam and mark scheme truly is.

Since retiring, I find that examinations are of only academic interest; however, I have some sympathy with P6 candidates, having passed on my 11th attempt! I sat the very first P6 exam (the one where candidates were supplied with actual examples of the prior-art paper clips etc.) and got 47%. I found that my paper clip exhibited no barbing action whatsoever, and noted the fact in my script; however, I later heard through the grapevine that anyone not considering that the paper clip exhibited barbing action would not pass. On testing various paper clips in the office, while some did exhibit a definite barbing action, others did not. Ironically, had I had a "barbing" paper clip (as the examiner presumably did), I might well have passed on my first attempt.

In subsequent years there was an almost perfect negative correlation between my increasing experience in real-life infringement and validity work and my P6 marks. Part of my problem lay with the fact that my trainers taught the techniques they had successfully used when they themselves qualified, and fashions had changed.

One always used the "Table" method, where the answer could consist mainly of a table. Another used the method of breaking long claims into lettered sub-paragraphs for easier referencing of features. The year I tried this, examiners' comments said that this approach was calculated to annoy the examiner. Another year a fellow I was working with who had set and marked several of the pre-JEB I&V papers corresponding to P6, gave me a tutorial. His approach, which was indeed used by our Inns Of Court counsel, was to identify 5 or 6 key features and concentrate on them, only briefly addressing the minor points. I think I got 19% that year.

Then there was the oil seal question with its physically impossible, Escher-like drawings: one side of one figure showing upstanding ridges, the other side showing recessed valleys, with the lines depicting the peaks of the ridges on one side mysteriously turning into the valleys on the other. I don't think the examiners liked my comment that the omnibus claim (covering the invention as shown in that figure of the drawings) couldn't possibly be infringed, as it related to something that was impossible to construct. That year my supervising attorney did complain to the JEB, but it transpired that the exam scripts had been destroyed by a secretary who needed the space. At least it did prompt them to institute an appeal procedure, having belatedly realised that none actually existed. The JEB could see nothing wrong with the drawings, which, from my viewpoint as a professional engineer well versed in the production and reading of mechanical engineering drawings, was somewhat alarming. In 10 minutes I knocked up from Plasticine the two different configurations, with two sets of drawings, one of which depicted what the JEB said was impossible to create using engineering drawing conventions. Quis custodiet ipsos custodes?

Then there was the brick question. The set of drawings in the copy of the question (with examiners' comments) that appeared in CIPA differed from the drawings supplied to candidates, giving rise to doubt as to which set the examiners had based their marking on.

In the mid-1990s I attended the CIPA panel meeting to discuss P6 with students. The report in CIPA did not mention the disagreement within the panel as to the appropriate form of answer. One member said that brief notes were all that was required, whereas another said that such notes should only be the basis for a more fully written-out answer. Attendees were left none the wiser as to what was required.

I passed when tutorials were organised, hosted by those actually doing the marking, so you could find out what style of answer was in fashion that year and give them what they wanted.

Back numbers of CIPA show that there was much criticism of the pre-JEB exams in the 1970s. Plus ça change, plus c'est la même chose.

Part of the issue is that some firms will only pay for one resit, or only for the first time you sit the exam. It seems that firms do not want to fully support candidates. If these exams take on average four or five attempts to pass, and candidates are required to pay for and fund their own training after failing, that is a lot of mental, financial and health strain on a candidate.

I am a repeat resitter of FD4 and so I think I can provide some insight into its problems.

Firstly, the firms do have something to answer for. The exercise practised in FD4 seldom arrives on a candidate's desk, because when that work does come in it is passed to a partner or other senior colleague and is not given to a "trainee". The actual practice of preparing infringement and validity opinions is therefore quite an unusual task for a "trainee". When you add exam conditions into the equation, it becomes a toxic mix of anxiety and stress which makes FD4 doubly difficult, because the exercise is nowhere near as well practised as, for instance, the preparation of a draft specification!

That is to say, candidates are not getting the practice they need. Personally, I have been in practice for 10 years and only recently got my first infringement and validity piece of work. The firm does get the work but partners swallow it up to boost their billing. Unsurprisingly, I am a repeat resitter of FD4.

Secondly, when candidates are preparing for FD4, obtaining a proper tutorial or proper feedback for a practice paper can be a nightmare. I have been known to hand over practice papers in April and not get anything approaching feedback until mid-September. Even the feedback I get is a passing remark that says "You will be fine". This is not a one-off either - this is every year of the last 5. The firms with the better training structures where senior colleagues are expected to provide proper training unsurprisingly don't end up with people like me who are stuck in the system whilst our peers are making partnership.

I am not saying my repeat failure of FD4 is everybody else's fault, but merely that proper training for the exam is just not there outside a select few firms. If CIPA are to properly review this process, then the behaviour of firms towards their trainees should be examined. The attitude that trainees are "bottom-feeders" permeates many firms, and it means that trainees, rather than being developed into well-rounded attorneys, are merely seen as fodder for work that cannot be billed out. For instance, I was told after failing my UK exams the first time that the firm were quite happy I had failed, as I would still be just as good at my job but they could pay me at trainee rates. Charming!

One solution may be to transform FD4 into a project-based assessment; the PEB could make it more fiendish if they wanted to, as the exam setting would be removed. The project could be set up like the FD4 exam, but allowing candidates time to properly prepare the answer would be a better test of their skills and definitely more realistic. The PEB could even assign independent mentors to candidates and monitor the situation to ensure the interaction was actually happening.

I entirely agree that the P6/FD4 problem is in large part the fault of over-specialisation within larger firms. I'd add the pressure to sit all exams together, too early in the career of the candidate, also makes failures likely.

Not sure that there's either the resources or the time for a project-based assessment, though, and it would have the effect of making only those organisations capable of sustaining candidates through this process able to qualify UK patent attorneys.

The PEB have now started giving out mark distributions when results are issued, that is to say, telling a candidate where they scored their marks. This is helpful, but what would be more helpful is to find out why that score was allotted. I have sat the exam six times now and I am not really any the wiser as to why I keep going wrong.

I have failed twice now and feel the same. Each time I have entered the exam hall with confidence (professional experience of I&V, sound understanding of construction, novelty, inventive step, etc., past papers being marked high, tutorials with examiners) and also left the hall with confidence (finishing most of the paper, making reasonable decisions based on construction). Then the results come out.

It is hard to come to terms with failing an exam where one has worked hard, understands the material, and has had no inkling at any point that failing would be the outcome. One feels hopeless. Not being able to string together what went "wrong" only adds to this, and makes the next attempt seem equally fruitless.

Releasing marked scripts would help to provide the level of transparency for FD4 that candidates deserve. Even assisted by the mark scheme, my colleagues and I have been unable to get anywhere near the mark I was awarded, so it would be great to know what the examiner was truly looking for (or whether they bothered looking that much at all, which at present is seemingly the case).

At present the PEB has a blanket refusal to release marked scripts. I cannot think of a legitimate reason why these scripts aren't available, only malicious ones. My own personal view is that this is to avoid it being discovered how variable the marking is...

This resonates with me. I can see that I did not obtain certain specific marks in the mark scheme, but I also appeared to gain zero marks for advice which is slightly different to the mark scheme but which is completely consistent with my construction. I have had two experienced tutors mark my script and they have given me 9 and 11 marks more than the FD4 examiners, because they actually checked whether subsequent sections of the paper were consistent with my construction. In contrast the FD4 examiners don't seem to have looked for these sorts of marks at all.

I think you're right about why PEB don't give out marked scripts any more - a few years back one of my colleagues requested theirs and the difference between the two markers was 14 marks! No examining body should be allowed to get away with that.

If you look at the PEB marking information, you will see that it is not possible for the examiners to have awarded marks that diverge by that amount. There is a threshold for consistency between examiners, and 14 is well outside that.

"Where both examiners have awarded a pass mark or both examiners have awarded a fail mark and the difference between the marks is 11 or more (5 or more for FD1), the Principal Examiner will ask the examiners involved in marking to review their marks in discussion with each other and the Principal Examiner."

A couple of points for consideration. Firstly marks can still differ between Examiners by as much as 11 marks - this seems very high given a pass mark of 50. Secondly, we have no idea how the reconciliation process (i.e. when one Examiner awards a fail and one Examiner awards a pass) was done in the present scenario where the pass mark was moderated to 47. Were the Examiners aware of the pass mark being 47, or was this a retroactive decision to moderate to 47 given the low pass rate? It would be helpful for PEB to provide these answers.

I beg to differ. In view of the PEB marking information, it is certainly possible for two examiners to have awarded marks 14 apart. The only change is that, because a difference of 14 marks exceeds the threshold of 11, there is an extra step of "fudging" an overall mark, rather than averaging the two marks to provide the overall mark. This "fudging" has only recently been introduced, as has the lack of transparency over marked scripts.

To the comment about whether the examiners were aware of the change of the pass mark, as far as I understand, they were not. The lay members of PEB decided to lower the pass mark, much to the surprise of the examiners.

This does raise a whole host of questions. First, according to PEB policy, where a candidate scores 47-49, special attention is given to determining whether that candidate should pass. This was presumably done before the change in pass mark, and is consequently unfair to candidates with those marks, and also with marks in the range 44-46. Second, why was no action taken about the pass rate at an earlier stage? Surely the examiners realised during the marking process that the marks they were awarding represented a significant shift down from previous years. Third, if the examiners did not know about the change in the pass mark, then it cannot have been the examiners who decided which scripts met the "minimum pass descriptor".

The cynic in me believes that 47 was selected not because those scripts met the minimum pass descriptor, but more that 47 resulted in an overall pass rate and reduction in pass mark that in combination would not embarrass the PEB too much.

Many candidates have asked the PEB to clarify exactly how this marking was implemented, especially for those who are borderline at 47, yet no answer has been given. It seems there was simply no consideration of those candidates who straddle the revised pass mark. As someone sitting in that bracket, close to 47, I find this deeply disappointing. Furthermore, the "minimum pass descriptor" was never intended to be used for papers with a noted issue: how can candidates be expected to "present most key information" when they have been given too much information to handle in the time? No information has been provided by the PEB on how to assess candidates on the basis of a flawed paper.

The PEB knows that the firms who foot the bill would probably start to ask questions if the pass rate were in the low 20s as has been rumoured. I tend to agree that they have probably lowered the mark enough to avoid complaints from the big firms without actually considering the function of the exam.

There is no doubt that far more than 20% of candidates are safe to practise, so it is dubious that an exam producing such a low pass rate is actually testing what it set out to test. However, the solution applied by the PEB is also dubious.

It seems to me that the PEB (and the examiners who set the exam) have forgotten that their job is to assess whether candidates are fit to practise. The paper has become an academic exercise and the fix applied is, frankly, pseudoscience. It is a sad state of affairs, and the people who ultimately suffer are the candidates who failed despite being perfectly capable of practising as patent attorneys.

Comments from the Middlesex P6 exam review: "The findings revealed a level of uncertainty among most of the examiners about how effectively the learning outcomes are met by the assessment process. Moreover, the way the exam is marked does not allow any inferences to be made about whether trainee patent attorneys have met the learning outcomes." And further down: "Any form of assessment must ensure that those who pass can be shown to have met the threshold standards with respect to each learning outcome. The research team found it difficult to discern that this was occurring."

Nope, nothing to see here. P6 is just hard, that’s all. Anyone saying otherwise is just a trainee expecting qualification to be handed to them.

I sat and passed P6 relatively recently (2015) and am inclined to agree with the various posters above who are resistant to allocating fixed marks to each "section" of the paper. As they rightly note, the level of detail appropriate to a novelty/obviousness/infringement analysis depends crucially on the construction of the terms in the claim.

I also disagree that there is somehow a presumption that one must arrive at the same construction (or a very similar one) to the examiners in order to stand a chance of passing. My own construction when I sat P6 differed considerably from that of the Examiners, as did my conclusions, yet I passed. Anecdotal evidence is not data but this does suggest that (in that year, at least!) the Examiners are prepared to be flexible provided that one's reasoning is internally consistent.

Despite these caveats, I do feel that the P6/FD4 exam is flawed in at least one key respect. Every year the Examiners complain that candidates' answers on obviousness were poorly-reasoned, rushed, brief, missing altogether, etc. - and every year the number of marks allocated to obviousness seems to increase, presumably in an attempt to provide an incentive for candidates to spend greater time on this section. This seems entirely to miss the point that it is not possible to do a proper obviousness analysis until the claims have been construed and assessed for novelty, and yet each year we see multiple independent claims, multiple embodiments of the invention, and multiple prior art embodiments. This profusion of issues simply increases the time taken for construction and novelty and, moan as the examiners might, what candidate in their right mind is going to skimp on these sections in a rush to get to the obviousness analysis?

One alternative solution would therefore seem to be to cut down on one or more of (a) the number of claims; (b) the number of embodiments of the invention; or (c) the number of prior art embodiments, without otherwise substantially affecting the structure of the exam or requiring a rigid mark scheme.

The Examiners need to ask themselves whether the intention of the exam is to be focused on construction *per se* (as the profusion of claims and "inventive" embodiments leads to) or rather on the *application* of that construction to the consideration of validity and infringement (which is surely the intention). Yes, claim construction still needs to be tested, but it would still be possible to create a reasonable construction test even if the number of claims were cut down.

Alternatively, if the examiners insist on retaining the current multiplicity of construction/novelty issues, would it not make sense instead to split the paper into two, with one paper focused on construction, novelty and obviousness, and another paper focused on an infringement analysis (perhaps relating to a different set of claims?) in which the candidates are provided with a suggested claim construction? (Since we are not supposed to construe the claims with an eye on the alleged infringement in the current exam set-up, anyway, I don't see any significant harm in making the validity and infringement exams independent of one another...)

I wonder whether making FD4 an open book exam would help here. Paper C presents its own challenges, but one thing that in my opinion made it more approachable than FD4 was the ability to bring in pre-prepared templates, tables etc. to help write up the answer. I don't think making it open book would fundamentally change what FD4 is trying to test, but it might help with time management and generally settle candidates' nerves when entering the exam.

For what it’s worth, I passed FD4 last year on my second attempt. Personally, I think I wasn’t good enough to pass in 2016 and it was the year of training and the fact that I could focus on just FD4 the second time round that was key to getting me over the line. I definitely agree that it is a difficult exam but if kept at the standards used for 2016 and 2017, I wouldn’t say it’s not fit for purpose.

The PEB themselves have confirmed an "issue" with this year's paper, yet the focus is on the candidates: whether they are good enough and whether they are seeking an "easy" pass. Why is no one asking questions of the PEB? What was the issue with the paper (they have provided no explanation as yet)? Why did it happen? How exactly did they conclude 47 was a fair pass mark when it seems they have set a paper that was fundamentally flawed?

The issue was the candidates didn't get the marks. The solution was that some were passed even though they didn't get the marks. Was the problem with the candidates or the paper? I think the problem was with the PEB, which saw fit to pass people the examiners considered not fit to pass. If there had been a re-marking to determine whether anyone just missing could be elevated, then I would see that as a reasonable approach. But to simply lower the pass mark was both irresponsible and idle.

This. It seems there are fundamental problems with the mark scheme: the recent trend of reducing the number of points awarded for significant aspects of discussion, particularly in construction; the variability in interpretation of the mark scheme; whether one even has to stick to the mark scheme at all (the examiners' comments suggest that reasonable comments will be taken into consideration); and the fact that 60 seems to be the best available mark. I think if the mark scheme were revised in a serious way, and more guidance provided to ensure its uniform application, passing would be less of a luck of the draw and more of a reflection of ability.

I would add that the number of marks given for discussing "no information" is quite remarkable in the 2018 paper; a quick search for the term reveals this. This would suggest there is more going on than a paper which was not received well: it was also not set well...

I had not realised how many times "no information" came up. It is pointless to expect candidates to fill in the blanks on how the technology works. The exam is supposed to test your ability to analyse a subtle situation involving claims, an infringement and the prior art. If you're not given enough to work with, all subtlety is lost and it becomes a guessing game.

In the exam, my thought process genuinely was "I don't know how this thing works so it must be because I'm being stupid". I ended up wasting quite a long time trying to work out whether I was missing something or if there just wasn't enough information to make a decision one way or the other.

The examiners don't seem to think that this paper was particularly time-pressured, but a candidate might easily waste 10 or 15 minutes second-guessing themselves and re-reading the paper in a panic trying to work out whether a feature is absent or present - if you take 15 minutes for every occurrence of "no information" then you can easily see that candidates who are not confident with the technology will waste a lot of time trying to make a decision that cannot actually be made. This just ends up penalising candidates who are trying hard to avoid misunderstanding the technology.

@Meldrew Your comment suggests that the PEB, independently of the Examiners, lowered the pass mark. This is worrying if true. If it was not the Examiners who set the lowered pass mark, then who exactly did the borderlining and assessed the minimum pass descriptor?

Regarding the idea that the problem was candidates not getting marks, I don't follow your argument. To me it appears that borderlining is something that should be happening every year. I don't see how the Examiners could possibly set a perfect paper every year in which 50% is always the cutoff between safe and not safe to practise. Benchmarking and normalising data are fairly standard procedures.

The issue is that the variability between examiners, in terms of the marks they are willing to award, is far greater than the difference in pass mark produced by "borderlining". This is why the exam is fundamentally unfair. Borderlining means a good candidate who was marked harshly may still fail, yet a poor candidate who was marked less harshly may just make it over the line. The marks do not represent ability, and therefore the exam is unable to distinguish between those who are fit to practise and those who are not. Consider your own trainees and whether the marks they received (regardless of whether they passed) represent their ability. My suspicion would be that they do not.

One common complaint with P6 is that understanding the invention is particularly difficult for those with a background in non-mechanical sciences such as biology or computer science.

In response to these complaints, CIPA recently released data breaking down the pass rate for P6 by subject area. CIPA (and others) cherry-picked from this data in order to argue that P6 is not biased by subject area. In particular, they note that the pass rate for Chemistry was high (46%) compared with mechanical engineering (29%), other engineering (33%) and physics (39%). If we avoid cherry-picking and instead take a broader view, we see that the pass rates for those with a background in biology, computer science and electrical engineering were catastrophically low: 20%, 0% and 6% respectively. All of these candidates (and especially those with a background in biology or computer science) can be expected to have a weak mechanical background, suggesting that the exam is indeed biased.

Furthermore, the strong pass rate for chemistry candidates is difficult to interpret, because many such candidates may in fact have a strong mechanical background. Indeed, chemistry encompasses a wide range of candidates, from those working in the oil industry with a degree in Chemical Engineering to those working on pharmaceuticals. Without knowing more about the backgrounds of these candidates, it is impossible to draw strong conclusions from this data point alone.

Finally, given the small sample sizes in this data, I would caution anyone against reaching any conclusions based on this data, at least without performing careful statistical analyses. But an initial reading of the data suggests that P6 is indeed biased.
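To make the sample-size point concrete: the published figures are percentages only, so, purely as an illustration with made-up candidate counts (21/46 chemistry passes, 0/6 computer science passes, neither figure taken from the actual PEB data), the sort of careful analysis the commenter asks for can be sketched as a one-sided Fisher exact test, computed by hand from the hypergeometric distribution.

```python
from math import comb

def fisher_lower_tail(pass_a: int, fail_a: int, pass_b: int, fail_b: int) -> float:
    """One-sided Fisher exact test: the probability, under the null
    hypothesis of no subject-area bias, of group B scoring pass_b or
    fewer passes given the pooled pass/fail totals (hypergeometric tail)."""
    n_b = pass_b + fail_b          # size of group B
    passes = pass_a + pass_b       # pooled passes across both groups
    fails = fail_a + fail_b        # pooled fails across both groups
    denom = comb(passes + fails, n_b)
    return sum(comb(passes, k) * comb(fails, n_b - k)
               for k in range(pass_b + 1)) / denom

# Illustrative counts only (NOT the actual PEB figures):
# chemistry 21/46 passes (~46%), computer science 0/6 passes.
p = fisher_lower_tail(21, 25, 0, 6)
print(f"{p:.3f}")  # ~0.036 on these invented counts
```

On these invented counts the tail probability lands near the conventional 0.05 threshold, and a single candidate either way would swing it substantially, which is exactly why conclusions from groups this small need careful handling.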

It is quite odd to accuse CIPA of cherry-picking by using the pass-rate for chemistry (n=46), when your counter to this is the low pass rates for biology (n=20), computer science (n=6), and electrical engineering (n=16 - and come on, it's an engineering discipline where mechanical inventions are often invoked!).

We also know that a similar finding (that candidates from non-mechanical disciplines are not at a serious disadvantage in exams using simple mechanical subject matter) was made for EQE candidates. This is why Papers A and B are no longer divided between different technical subject matter.

Is that not the point? One can cherry-pick from the data to support any position one cares to take. And I am glad you agree with my original post that, given the small sample sizes, it is very difficult to draw any conclusions from the data, and that it is certainly not possible to support CIPA's conclusion that the data definitively shows there is no bias!

And the question at stake here is specifically the UK exams, not the EQEs. It is quite possible that a significant bias exists in P6, because of the complexity of the mechanical inventions in the UK exams, but not in the EQEs, with their focus on accessibility, especially across languages. I would challenge anyone without a mechanical background to understand the "oil seals" paper in less than 5 hours!

Is there any evidence that the FD4 exam paper and the mark scheme actually distinguish between candidates who are safe to practise and those who are not, based on a pass mark of 50%? It would be very surprising to me if there were any evidence whatsoever that each year all candidates who are safe to practise score 50% or higher and all who are not score below 50%. On that basis, why is there not "borderlining" every year?

It doesn’t really make sense to me that 25% of P6/FD4 is on a topic, inventive step, that can be examined in other papers.

Makes me wonder if the year on year increases to the mark allocation for inventive step are to make sure that newly-qualifying UK attorneys properly know how to run a UK inventive step argument.

If that's the case, then perhaps a solution is to remove the exemptions based on the EQEs for the amendment (and drafting) papers. This would ensure that all candidates are tested for UK inventive step in P4/FD3 allowing P6/FD4 to focus on topics that only it can test.

I believe the UK patent office also accepts the problem/solution approach, so candidates can quite rightly choose to use it if they wish. FD3 already allows candidates to respond to inventive step arguments using the problem/solution approach. I think FD4 should also make it clear that candidates may use the problem/solution approach for inventive step.

I find it interesting that the P6 mark scheme often lays out the wrong steps, or at least the wrong order of steps, of the Windsurfing test, whilst the Examiners' comments declare that candidates applied the test poorly.

JDD, who have vast experience in training candidates for the finals exams, suggest adding another 30 minutes to 1 hour to the exam. This seems not a bad temporary fix for the 2019 sitting, since the exam and mark scheme are presumably already written for that paper. Thoughts?

To compensate for the various difficulties with technical wording that may not be familiar to candidates from certain fields, I would suggest that the PEB allow candidates to bring a dictionary (unannotated) into the P6/FD4 exam. I think this would be a step towards making the exam fair for all.

As a resitter whose firm won't pay for my resit, at what point will I be told what PEB are doing to ensure next year's paper doesn't have the same issues? I'm now a paying customer and would like to know what my £450 is going to get me... Because if it's more of the same I think I will give it a miss.

I wish I too could take the same approach, but despite the comments here, many firms do require trainees to pass the UK exams, and so I will be back again next year. I paid for the exam myself this year too, and am galled at how flippantly the PEB can dismiss a failing in the examination system and simply expect candidates to pay to show up again next year to rectify its mistakes. There seems to be no accountability. The Middlesex review made a number of suggestions for changes to FD4, yet there has been no information on whether a single one has even been considered, let alone implemented.

My firm is also capping the number of times they will pay for these PEB exams. Is it because the PEB exams no longer have any value, or because firms no longer wish to support their candidates after one or two attempts? There seems to be an unhealthy approach to training, development and career progression.

The quality of training at many firms and the cost of these exams just add to the difficulty. Training is generally poor across the profession, particularly in regard to this exam, which leads to a frightened approach to FD4 rather than the positive, confident approach you could take to the law paper, for example, if you had prepared properly. The cost adds pressure which does not help on the day. People are being held back in their careers by this exam. Just today I see that a few more of my peers have been promoted to partnership whilst I languish, needing to pass P6.

Something needs to be done. The statement released by email today is about as helpful and insightful as a smack in the mouth after a visit to the dentist. They need to look properly at this exam, and at training across the profession, rather than draft statements about how they followed their procedure and no issues came up. Testing the paper on people who are under no pressure to pass is not a complete test of the paper's adequacy. If I sat the 2018 paper over the weekend without the pressure of the day, I dare say I could put together a decent answer.

I have received by email a statement from the PEB governance board on FD4. It has also been released on the PEB website (search "PEB governance board statement on 2018 FD4").

The statement sets out the marking and borderlining processes in further detail. In particular, it identifies that the difficulty of the technical content was not realised until the marks had been collated, which raises many questions about the effectiveness of the earlier steps in the chain. There is also a call in the statement to stop sending abusive comments to the PEB.

Whilst I appreciate this statement, I am not sure it alleviates my fears that the 2019 FD4 will be any better than previous years, nor does it quell my concerns about the interpretation of the mark scheme and the variability in marks between examiners. If the PEB want to do something useful for candidates, these are two of the places they could start...

Yes, it would be a good idea to increase the number of newly qualified people who test this exam, yet there is no indication that the PEB are even considering a single change (the whole statement appears to suggest that absolutely nothing is wrong with the current processes and therefore there is no need to change anything for 2019).

I'm not sure whether that would achieve anything though - most newly qualified people (including myself) don't have any idea how we actually passed the exam, but are just thankful that we did.

If we were to test the exam, all we'd be able to say is "yes, that paper was as weird as the most recent ones".

And therein lies the problem - of all the PEB exams and EQEs I sat, FD4 is the only one I wouldn't put money on being able to pass again if I prepared in an identical manner. It's too much of a lottery.

Having lots of recently qualified people take future papers wouldn't directly fix the paper. However, if there was very little overlap between those who pass one paper and those who pass another, it would show that the paper is not as fair as the poorly written articles in the CIPA journal would have you believe.

Maybe it's worth exploring industry roles or other options if private practice firms don't seem to want to support candidates through exams. I'm not sure what the value of the PEB exams is: are they being used as a tool for holding back people's careers rather than enhancing their skills by allowing them to gain exposure and experience as a patent attorney? There is no practical difference between a qualified EPA and a qualified EPA who is also a CPA; both can do the same job very effectively. I would suggest the difference might be that those who do not require the CPA in their workplace are able to progress much more quickly than those who need it, and therefore increases in wages and career progression come much sooner.

Perhaps take the pressure off trainees. They need to qualify as European attorneys, but private practice does not need to make the CPA a requirement; it can be a nice-to-have. Let's not take away opportunities from people who deserve to be promoted and to progress their careers.

Go ahead and explore industry roles then... Just accept that you will not have the same earning potential as you could in private practice with CPA status. That is your free will - not a reason to criticise the CPA examination process which is robust and rightly only lets candidates pass when they are capable of real-world practice - unlike the EQEs which are considerably more academic.

If you look at the average earnings for trainees and attorneys with 1-5 years' experience, you get paid more in industry. It is less stressful, as there is no such thing as billable hours and attorneys actually spend time training their trainees. Also, they only need to be EQE qualified.

It is simply wrong to assume that you have more earning potential if you are CPA and EPA qualified. For example, getting selected as a partner requires many things, such as whether the potential partner gets on well with the other partners, as well as their ability to bring in new business. There are a lot of business development skills involved which have nothing to do with the PEB exams. So I would also question whether the PEB exams are fit for purpose. It's a reasonable question to ask, and it's up to the PEB to convince their paying customers that the exams are modern, valuable and rewarding (including personally rewarding for candidates).

"Go ahead and explore industry roles then... Just accept that you will not have the same earning potential as you could in private practice with CPA status" - I'm not sure where you got this from, but the annual salary surveys clearly show that up to partnership level industry pays significantly better than private practice. You also don't need any UK/EP qualifications to work in industry, so salary is much less reliant on exam passes. I know some experienced industry IP professionals who are earning more than most of the associates/FSPs I know in private practice.

The first rule when accused of any wrongdoing, as any lawyer will tell you, is never to admit your fault. An added advantage of this approach is that it may absolve you of actually having to do anything by way of rectification.

I sat FD4 for the first time in 2018 (the so-called "gantry-gate" year) and passed by a decent margin (55 with a pass mark of 47). I prepared by doing 4 past papers to time and reviewing them against the mark scheme. I have also previously passed paper C with a similar mark. In essence, I prepared well and passed the exam on my first attempt. I do not believe I passed out of luck but rather out of good preparation. I felt confident when leaving the FD4 exam and passed - surely a sign that the exam is fit for purpose?

Do you believe that the 66% of people who failed had inadequately prepared and were not fit to practice? Many people who pass all EQEs on the first attempt fail FD4 despite extensive preparation (in my case, more than 10 past papers and a training course and having achieved an average of around 70 on the EQEs first time).

Haha! Top notch trolling there. Of the over 100 comments here, count how many are in support of P6. That is very telling. Anyone who thinks they passed (especially on the first attempt) purely because they were well-prepared and intelligent needs to reassess their critical thinking. Hmm... I see the problem. Oh yes, passing P6 is a surefire indicator of one's ability to carry out I&V analysis!

The only way to force the PEB to do anything meaningful is to STOP paying them by refusing to sit the exam. But I know this is difficult for many trainees to do, given their firms' insistence on passing it. Anyone care to post their firm's rationale for this insistence?

I'm seriously thinking that it's NOT worth paying to do this exam (especially as I now have to pay for myself). They have been charging a lot over the last few years, but I don't see them taking on much of the feedback from candidates, who are, by the way, their paying clients. I understand most of their money is generated from exam fees.

One suggestion is to actually give candidates enough time to finish the paper. No need to design a paper that cannot be completed in time. That's not much to ask for.

Again, the comments section is filled with people who have passed and simply refuse to accept that there are issues with P6, instead judging those who raise them as wanting a "free pass".

The profession is changing, and specialisms are important both for training and for actual practice. One only needs to look at the volume of specialised case law before the EPO to see how this plays out in real life. Having prepared for (and passed) this exam very recently, I can say there is simply nothing about it that corresponds to real-life practice. I will reiterate what I said in the previous thread: the way people train for this exam is simply to pass the exam, and has nothing to do with how someone would actually conduct an opinion on infringement and validity.

We have seen the same comments from the PEB and CIPA over the last few years, and nothing has changed. It's just sad.

An open question seems to be whether P6 tests relevant skills. I suggest an easy way to settle this would be to ask someone we all agree is an expert to sit the exam under exam conditions. I would suggest Mr Justice Arnold. If a specialist patent judge cannot score >90%, then the mark scheme is incorrect. I will personally buy him two beers for this service (and I won't even insist on Spoons).

(I admit, I have suggested Justice Arnold in slightly bad faith. His latest judgement was 618 paragraphs, so he does not seem like a man who is willing to bash out a complete infringement and invalidity opinion in 5 hours. It would take me 5 hours to write out just one of his table of contents).

Possibly the P6 exam marking corresponds with the examiners' experience of real-life practice. The more real-life experience I had, the worse my P6 marks became. My real-life cases generally involved far greater technical and legal complexity than the P6 scenarios: by way of example, a GB patent with 112 pages of letterpress description and 76 sheets of drawings, and several German-language EP patents where, for one, the translation filed in legal proceedings was significantly different from that filed with the UKPO, and, for another, the claims related to the invention of another family member as they had no support in the description. Despite my problems with P6, my employers were very happy with the standard of my work.

Thanks for sharing, Ron. This is a very good example of why the PEB exams are not fit for purpose: they don't reflect the real-life experiences or practices that many attorneys face. As another example, I often do a lot of patent strategy searching, but none of this is actually tested. In my real-life experience with infringement and validity, we rely heavily on interacting with the client to discuss ambiguous terms in the claim language. An infringement and validity assessment takes a few weeks of work, not, in my view, 5 hours. Often we don't actually advise one way or the other but present both sides of the argument. We help the client understand the risks and advise as far as possible, but ultimately it is their business and up to them; it's up to the courts to decide the case if it gets that far. This type of work can only come with exposure to real-life work and valuable experience. It would be better to have continual coursework/assessment whereby candidates would be required to gain certain skills, exposure or experience in certain fields before being admitted to the profession.

I also don't think FD1 reflects the modern daily profession (only my opinion). In real life, you would never advise without checking the law books, and even then some of the scenarios that come up in FD1 feel very artificial. Client interaction is a two-way thing and relies quite heavily on personal interaction and business & commercial understanding. You can only really get that from work experience, and these exams seem to be blocking candidates from getting the vital experience they need to succeed in this profession. In case anyone is wondering - I passed both P2 and P6 (when it was 4 hours).

As a candidate looking to sit FD4 for the first time in 2019, I am genuinely worried about this paper, as I know there are many issues with it. Do you think CIPA/PEB will resolve these issues before the 2019 paper? Or would it be better to give it a miss until CIPA and PEB have published their investigation and things have improved?

I would like to add that IPKat has done a super job in allowing candidates and fellow professionals to comment and raise their concerns. This type of blog is important in allowing candidates to share their experiences. I find it much better than the formal surveys CIPA/PEB send out, as those are rigid, fixed sets of questions and do not allow candidates to share their opinions and experiences the way this blog does. I would not otherwise have known that many others share some of my thoughts and concerns on the P2/FD1 and P6/FD4 examinations.

I think it's important to provide a platform that allows candidates to discuss freely, and I hope IPKat continues to run something like this every year. I think it's good for the profession, as it would hopefully help CIPA/PEB to consider changes or take action more quickly if they want to.

Why does the PEB FD4 exam have such a low pass rate, consistently, every year? I think there is an issue. I also think that making an exam so difficult that only 30-35% of candidates pass, from a pool of very highly skilled and intelligent candidates, is not necessarily a good thing for the profession. Many of these highly skilled candidates end up applying their skills and knowledge in other professions after deciding to leave this one. Indeed, many would still be doing IP work, just not in this profession. They are very capable of doing some of the IP work and can undercut an attorney's hourly rate by doing the same job under a different title - e.g. I've seen clients drafting very competent claims for a fraction of the price of an attorney. Be careful that we are not unintentionally creating a massive pool of skilled and knowledgeable candidates who are not working directly in IP as patent attorneys. That would not be good for the profession, as it may affect wages and future development. So I think something needs to be done, because the profession is losing a lot of talent, and I would predict that a large reason why trainees leave the profession is the exams and a perceived lack of change in the exam system.

The drop-out rate is high, and I am not surprised if the PEB exams are part of the reason why very talented individuals with significant skills and knowledge of the IP profession decide to leave. There comes a point where enough is enough, I suppose, and we are talking about highly skilled individuals who can walk into another rewarding profession quite easily.

I agree with the comments above that we do not want to lose so many bright and talented individuals over these exams. I'm not saying make the exams easier, but make them more relevant, more doable within the time frame and fairer, and actually make sure that a decent number of candidates who work hard and prepare well pass. 30% of candidates passing is very unsatisfactory, and I do not believe that the whole 70% who failed are not fit to practise. This is in view of the fact that the pass mark dropped to 47% this year.

I do hope the PEB look at these many comments and actually acknowledge and take action on these genuine concerns from candidates and current attorneys.

I think part of the problem is the expectations firms place on candidates. As many have mentioned, some firms do not appear to support candidates beyond one re-take, when everybody knows how tough the exams are. Perhaps CIPA/PEB could get involved and work with firms as well as candidates.

Having said that, the PEB could do with looking into the exam papers themselves - for example, whether there is sufficient time for candidates to finish. In P6 there are far too many topics to cover (e.g. construction, infringement, novelty, inventive step and internal validity) within the time given. In real life, you wouldn't do an inventive step analysis if you think your claim lacks novelty, so the amount of material candidates need to get through needs to be addressed.

I believe part of the issue is that candidates do not have sufficient GB work. Even once qualified, the amount of UK work is minimal and quite limited. I therefore truly understand why the PEB exams may seem irrelevant and of little value to many. As a result, many industry roles do not have UK qualification requirements. I would urge private practice to relax its insistence on gaining UK qualification(s), as it does not allow employees to progress. I personally don't think UK qualification should be a condition for promotion in private practice firms, and UK exams should be optional given how little UK work the profession carries out as a whole.

I would also like to see more improvements in the PEB exams, especially FD4. I think it's important for the PEB and CIPA to address the concerns of many, e.g. the amount of time given in exams, the suitability of the topics/subject matter in FD4 (and possibly FD1), the syllabus, and training and development in the profession. One area I find difficult with FD4 is that I do not know what the Examiners are looking for in an answer or what skills are being tested. Are we testing candidates' ability to speed-write an answer? It is not clear what the objectives are for FD4, and the PEB could do with releasing excellent templates/guidelines/model statements and example answers for each section of FD4 - what are they looking for in each section, and why? These are not vitriolic comments, but a way to give candidates a better understanding of what they need to do in each section.

I don't think any of these exist (certainly the EPO provides much better training materials), and therefore I won't be sitting it this year until some of the concerns about FD4 are addressed.

The Chief Examiner's letter in the April issue of the CIPA journal made me laugh. Can you imagine what would happen if an allegedly experienced surgeon lopped off the wrong leg by mistake and said "Oops! I'm only human!"?

I do think the PEB board needs to rethink. Training is poor across the profession as a whole, and attitudes need to change. Candidates have a right to be very upset with the current examination system; it's not working for a lot of people. The value of the UK exams is, in my opinion, diminishing. We no longer do much UK work, as someone pointed out earlier. The exams need to move with the times, just as inventors need to move with the times.

I won't be taking any of the PEB exams this year until the issues have been resolved and I am confident I am getting value (yes, I now need to pay for myself). I learnt more preparing for my EQE exams. I have prepared really well for the UK exams in the past (8 months), with no luck. I have no idea what to expect with these UK exams, and it seems no one knows. That can't be right!

Experience in IP matters is only gained by actually doing the work, i.e. meeting and interacting with clients and doing real infringement/validity assessments over your time in the profession. You can't gain these experiences through an examination paper. A re-think please - the system ain't working.

Personally, I find the Chief Examiner's comments in the CIPA journal quite upsetting. I had to wait a while before posting this. Yes, some of the comments directed at certain Examiners are out of order, but there was no attempt from the Chief Examiner to address the outcry about unfairness and the claims that the PEB exams are "not fit for purpose". There was barely any acknowledgement of the hard work put in, or of the disappointment many candidates feel after their results.

Last year, I was the top-billing trainee, but because I didn't pass an exam I only got a small raise. My colleague did pass their exams and got a significant pay increase. I'm not complaining about the worthiness of pay raises, but my wages are being directly affected by the examination system the PEB created, even though I was deemed competent enough to carry out the work and hit my billing targets. I am now considering whether to stay in this profession, as I don't want to be held back because of a few exams. I hope that others in more senior positions could sympathise a bit more with the many trainees in this position. We have heard the Chief Examiner's views, which I respect and which are entirely her opinion, but CIPA readers should also be able to read about these experiences from a candidate's perspective too.

I find the PEB to be very inflexible in their approaches and ideas, and the (unnecessarily) difficult level of the examinations is not justified. The PEB keeps referring to the fact that the exams must be extremely difficult because someone could open a patent attorney firm the moment they pass. In real life, this theory is far-fetched and unreal: no-one will do this. Other professions - accountants, lawyers, builders and so on - now have many routes to qualification (all equally respected) and are flexible in their approaches to fit all sorts of candidates. The PEB exams, on the other hand, just alienate many in the profession, and confidence in the system is very low. Some candidates have children to care for, some are slightly younger, some learn at different paces, and so on. One set of exams does not fit all, and these exams are no longer sufficient for a lot of candidates. The PEB needs to be flexible in how it tests the new generation. The Foundations are great because of the variety on offer - why can't it be the same for the advanced papers?

The PEB has always used this theory of somebody opening a patent firm right away after passing the exams. I see a very simple solution: make it a requirement that you must have 5+ years' post-qualification experience before being allowed to open a firm in your own name. This would be a much better system than exams that are not fit for purpose. The current system is far too dependent on exam results and negatively impacts people's careers, wages and job opportunities.

I am absolutely horrified at the extortionately high exam fees the PEB is charging this year, especially given the difficulty of this year's paper. People say that university fees are expensive, but at least you get lectures, materials and tutorials out of them. The PEB is charging a lot and offering little in return.