I know it's a bit late for this conversion, so this suggestion may be a little moot... but maybe next time (or anyone yet to do it) could pay a visit to the local wrecker / machine shop and pick up an old crankshaft with spun bearings.

Cut off the crank flange end and machine a motor-shaft-sized hole down the centre; that way you could use a pre-made matching aluminium flywheel (should be lots cheaper than custom machining) and bolt it directly to the crank flange end.

A lifetime ago, back when we were mucking around with Holden 6-cylinders (early '80s), a mate of mine performed a 235 ci crank conversion on his turbo 186 Monaro.

It wasn't overly popular (at the time), but people would take a 221 crank out of a Ford motor and join a Holden rear end onto it to make 235 ci.

The process was fairly simple. Grind the Ford crank journals down to suit Holden bearings, cut the rear flange off after the last main bearing and join the Holden flange on.

They would use three dowels for location and three bolts tapped in from the end for the join, then the crank would be welded.

Since you already have the motor shaft poking out it would be a lot easier to drill the centre of the crank flange and use taper locks or the keyway to secure the flange.

Hope that makes sense...

Last edited by EV2Go on Sat, 11 Dec 2010, 06:45, edited 1 time in total.

We decided it would be fun to do the controller commissioning tests, a lot of which are run at 24 V, so we don't need a lot of BMUs or cells bolted into cages.

So we found our set of 16 cells that had been running for a while on a solar inverter. The BMU software on these failed some time ago (some of the LEDs stopped flashing), and we never got around to figuring out what went wrong. Perhaps we should have.

I bolted on a set of BMUs to eight cells. I measured all 8 cells; they were matched to the millivolt. While I was waiting for something I gave them a quick blast of charge from my non-isolated charger. We hooked up the JTAG programmer to the first cell, and I wondered if I smelled something burning. Well, Weber had just bought a whole lot of new shelving recently, and the MDF in those shelves smells all the time, so we dismissed it. Things weren't working well though, and it suddenly dawned on me... oh no! Non-isolated charger! Maybe this burning smell was real after all!

Damn! So all the boards came off (quite a lot of wrench work). As soon as it was all done, I realised that it can't be too bad, since I had seen some memory locations, so the JTAG was at least partly working. Then I also realised that we were working with the negative end of the chain, and the charger has its negative output connected to ground... so nothing bad should have happened.

Damn! But then Weber realised that there were no wire wrap wires under these boards; we had been using special versions of the software that didn't need the new board mods. But the latest software would require the mods. So the boards had to come off anyway!

So the problems we were having were probably because those boards were unmodified. We had another string of 8 cells ready to go, but they were the wrong isomer (negative and positive were mirror-imaged compared to what we needed). Fortunately, the other 8 cells were the other isomer, so they were ready to go. So we put the new boards on the other set of 8 cells, all strapped together. Note that I hadn't checked these 8 for balance.

When these were all bolted on (third set of wretched wrenching), Weber noticed that the first two cells were bypassing. Eek! I decided that this had to be because of hardware problems that needed access to the underside of the board, so these boards were unbolted (4th set of WW). I found and fixed a problem with the outside board, and the next one seemed to fix itself. Hmmm. Well, while the boards were off, I thought I'd check the software. Oh dear, version 4 data format; we use version 5 these days. So I individually flashed the Bootstrap Loader Writer (BSL writer) to each of the boards. That all seemed good, so we wrenched these boards on (5th WW).

To test things, I connected the comms (making sure that the unisolated charger was not connected) and found that the comms echoed only to the third board. I got out the multimeter and started poking about.

Huh? I'm only seeing 0.86 volts at the JTAG connector. For that matter, I'm only seeing that at the PCB near the cell terminals. Oh dear, I'm also only seeing that voltage from bolt to bolt... looks like this cell got discharged somehow. Most likely, whatever went wrong to stop the chain of comms all those months ago, caused the software to go crazy and drained that cell. I quickly checked all the others; they were all good and within about 2 mV of each other. Phew.

We disconnected the comms wires and put 6 A into the string of cells. The bad cell came up to 1 V in a matter of seconds, and was at about 2.3 V after a few minutes (falling back to 2.1 V when the charger was disconnected). We soon had it up to about 2.6 V, falling back to about 2.5 V. I thought that might be enough to work with to re-flash the BMU with the bad cell under it, but it didn't seem to want to play ball. I noticed that the cell voltage went up slightly (a millivolt every few seconds) when the JTAG was connected, so the cell must be getting charged through the JTAG circuit. Possibly it wasn't impressed with its secondary duty as a battery charger.

We decided to replace that cell and deal with it elsewhere. That meant unwrenching the threaded rod holding the clamps in place, unbolting that one cell, and reversing all that. (WW #6?).

Well, at that point, we got the dreaded "all FFs in the calibrations" syndrome; that meant we had to find the calibration values for the main clock, and later we'll have to recalibrate all the voltages and temperatures. Sigh. Some BMUs seem way more sensitive than others with respect to glitches when the JTAG connector is removed; some just seem to go crazy whenever the JTAG is removed, and they tend to change at least some of the flash values (either program or calibration data or both). So we had to redo the frequency calibration for another board after Weber had to attend to family needs.

I took the bad cell home, as I have a power supply (only 3 A, unfortunately) with adjustable voltage and current limits. It's been on charge at around 3 A (I can't say for sure what the average current was, since the power supply gets hot and shuts itself down a lot, especially today when I had papers clogging up the airflow). It's up to 3.17 V now, and slowly rising. So far, it seems to be behaving like the other cells, except that none of the other cells have ever been below 2 volts, or even 2.5 V I think.

Maybe it will come back with nearly all its capacity; somehow I doubt it (especially since seeing this video from this thread). I guess I'll know in a day or two.

Last edited by coulomb on Tue, 14 Dec 2010, 14:10, edited 1 time in total.

Addendum: as I was leaving, Weber pointed out that really we need to sort out the mounting of the resolver to the end of the motor ND-end shaft, before we get too carried away with testing the controller. Well, we know the controller works, but seeing the end of the gearbox turning under electric power would be a great morale boost for us.

So maybe that is really the highest priority at present. It's a pity, since fixing it requires that we take out the rotor, which means undoing most of last week's progress. But we know the drivetrain goes together now, and the motor mounting bracket is made (at least until such time as we push 350 Nm through the motor and that torque, multiplied by the gearbox, twists the bracket into a pretzel).

The cell is now fully charged, and took something like the expected 40 Ah of charge. It's difficult to be precise, since my slack power supply has a very rounded voltage-current characteristic: it spent most of today under 3.4 V but delivering less than 2.0 A, even though the current limit was set to 3.0 A. When I had the voltage limit set to 3.60 V, it was even worse; the current was under 1.8 A while the voltage was still well under 3.40 V.

Over an hour after charging, its cell voltage is still over 3.60 V; most of the cells I've looked at drop below 3.5 V almost immediately after the charge is turned off (from memory). So that seems good.

It was charged to 3.644 V, according to my multimeter. It probably sat there drawing negligible current for several hours while I was at the AEVA meeting.

Edit: next morning, at 3.560 V.

Last edited by coulomb on Thu, 16 Dec 2010, 02:43, edited 1 time in total.

We spun the MX-5 motor electrically today! Our first time with the WaveSculptor drive.

The "regen slowdown" is rather abrupt; there is almost no rotational inertia, so we barely see a flash of negative bus current before the motor speed goes to zero. We were driving the controller from a GUI app provided by Tritium. The PC connects to the CAN bus via the Ethernet to CAN bridge. All that software just worked first go. We had slider controls for motor current, bus current, and target motor speed.

The encoder was not properly mounted; Blu-Tack and electrical tape were involved. The "pack" is just 24 V, as suggested by the Tritium user manual for testing. We used two EV200 contactors, one with a 47R 50 W precharge resistor across it. We also had a 63 A breaker; you can't be too careful, even at "only" 24 V. We limited the speed to 530 RPM.

It's great to reach this milestone, but we realise how much work still remains. We have the whole battery pack to finish; no cells are in racks yet. We still have to get the charger going, sort out the Driver Controls and the cooling system, and balance the motor. However, the potbox is in, the drivetrain is pretty much done, and the bigger differential is in. We have no hydraulic brakes yet (detail!).

Last edited by weber on Sun, 02 Jan 2011, 13:14, edited 1 time in total.

One of the fathers of MeXy the electric MX-5, along with Coulomb and Newton (Jeff Owen).

Weber and I have been considering our options for controlling the charger from Tritium's Driver Controls. We've figured out a reasonable way to get two RS485 ports (both with TX and RX) to control two BMU strings.

The obvious way to control the CAN-enabled charger is through the provided CAN bus interface. But the problem is that the charger won't play nicely with the controller on the same bus; we're told that anything more than about 2 packets per second will confuse the charger. It needs its own CAN bus to itself. That's a real shame, but it's not something we can change.

So we have two main options. One is to add a second CAN port to the Driver Controls box; we can essentially duplicate the existing CAN circuit with an MCP2515 and a transceiver chip, about $10 in parts, and it could be mounted on a small daughter board inside the Driver Controls box. We can then wire the second CAN connector (currently paralleled with the first) to this second CAN port. Usually the second CAN connector would be used for the terminating resistor; we would just put a permanent 120R resistor in the Driver Controls instead. If we need to daisy-chain something else on the CAN bus, we could still do it at the controller end.

The other option is to essentially duplicate the opto-isolation circuitry inside the CAN interface circuit (which I've already traced and published here). It's ironic that these opto-isolators don't actually provide any isolation; the other end is referenced to ground from the charger's control circuitry, which connects to the negative output of the charger. The isolation of the CAN bus from the charger control circuitry is in the electronics under the board I've explored; I don't particularly want to reverse engineer that to find out how good the isolation is. That's a reason not to use the provided CAN interface; we really can't be confident that it will be able to provide 820 V of isolation. We'd need a third RS232 port (input and output), but we think we can do that with software we've already debugged.

So duplicating the opto isolation circuitry seems to be our best option at present.

I had the idea of using the existing CAN interface circuitry to do the isolation for us; we don't want to have 820 V circuitry running around inside Driver Controls. So we need an external box; why not use the box provided? Well, there is the problem that the optos aren't isolated. We could cut tracks so that it was isolated, but the clearances are so small that I would not be confident of sufficient isolation.

So we'd likely have to design our own printed circuit board, and make sure that the isolation is good for 820 V.

A high voltage pack, at least one over 400 V, does raise some issues.

[ Edit: when charging two half-packs (with one charger or two), the packs would not be in series, so each charger only has to isolate 410 V, which it presumably will be able to do. So the isolation concerns are not quite so strong, but still to be considered. ]

Edit: fixed a link to CAN charger circuit

Last edited by coulomb on Wed, 06 Apr 2011, 09:39, edited 1 time in total.

Do you want to have a CAN bus comms link to a charging station outside the vehicle?

If the TC charger serial port protocol were public, you could easily make up your own interface.
This may happen in the future, so having your own MX5-evCAN bus that you know and manage, and just linking via a CAN bridge module, would be better, even though making your own CAN module is a tough job, especially diagnosing comms faults.

DIY is much more suited to duplex RX and TX channels.
CAN is very fast, and TX/RX share the same pair, like USB.

7circle wrote: Do you want to have a CAN bus comms link to a charging station outside the vehicle?

No, we just want to start the charger, and cut it back to just under 1 A when the first cell overvoltages, and turn it off. Charge progress (e.g. present charge current and voltage) would be a bonus.
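One possible reading of that sequence can be sketched as a tiny state machine (a sketch only: the state names, thresholds, and the send_request() hook are hypothetical stand-ins for the real BMS and charger I/O, not our actual code):

```python
# Hypothetical sketch of the charge-control sequence: full current until the
# first cell over-voltages, then just under 1 A, then off.  All names and
# numbers here are illustrative assumptions, not the real Driver Controls code.

FULL_AMPS = 5.5        # assumed full charge current
TRICKLE_AMPS = 0.9     # "just under 1 A"
CELL_MAX_V = 3.6       # assumed per-cell over-voltage threshold
PACK_TARGET_V = 100.8  # assumed pack voltage request

def control_step(state, cell_voltages, send_request):
    """One control tick; returns the next state ('full', 'trickle', or 'off')."""
    over = max(cell_voltages) >= CELL_MAX_V
    if state == "full" and over:
        state = "trickle"          # first over-voltage: cut back the current
    elif state == "trickle" and over:
        state = "off"              # over-voltage again: stop charging
    amps = {"full": FULL_AMPS, "trickle": TRICKLE_AMPS, "off": 0.0}[state]
    send_request(PACK_TARGET_V, amps, power_off=(state == "off"))
    return state
```

The send_request() callback would wrap whatever actually talks to the charger (CAN or serial).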

7circle wrote: If the TC charger serial port protocol were public, you could easily make up your own interface.

I'm confident I can talk to the charger serially now, since I got CAN packets back from the charger. I was a bit thick headed about how it works.

7circle wrote: This may happen in the future, so having your own MX5-evCAN bus that you know and manage, and just linking via a CAN bridge module, would be better, even though making your own CAN module is a tough job, especially diagnosing comms faults.

We could always put the CAN interface back on to talk CAN to the charger. But the charger as part of an MX5-evCAN bus isn't going to work, because of the 2-packet-per-second limitation with the charger.

I'm sure that one day they will realise the folly of this, and change it so that the RS232 bottleneck only transmits packets with the charger's address (so there can be plenty of other CAN traffic, just limited to two charger specific packets per second). Until then, we're stuck with a private bus for the charger.

7circle wrote: DIY is much more suited to duplex RX and TX channels.

Yes. You can even make out data bits on a CRO, sometimes. That's a bit harder with CAN.

7circle wrote: Are you interested in openCAN development?

Not right now. Maybe later.

7circle wrote: What other CAN modules do you want to interface to? A console display?

At present, only the controller is planned on the CAN bus. Though I was wondering how we are going to get the Driver Controls master BMS software to talk to a display, and CAN might be an option. We're already connecting 3 full duplex RS232/485 ports to the DC chip; more might start to strain things a bit. Maybe we can sacrifice one of the gauge ports for a transmit-only RS232 port.

7circle wrote: Does the MX-5 use any CAN on the driver speedo and dash indicators?

I'm 90% sure that this circa 1990 MX-5 is pre-CAN. I was hoping that the ECU would have a nice CAN bus already connected, so we could use the place under the passenger's feet where the ECU came from for Driver Controls. But it likely doesn't have any CAN bus (the MX-5 is at the other laboratory at present).

If you could share the protocol info that goes between the TC-Charger CAN box and the TC charger, it would be very helpful to others interested in using serial with their Elcon / TC charger.

Their little CAN box is a worry with no isolation; using opto devices is deceiving if you don't inspect the track work.

Thanks for posting the circuit info you nutted out.

As the analogue mods have also become public on forums, it makes the chargers more attractive to DIY EVers.

It looks like you're very close to making a CAN-bus-enabled MCP2515 micro board. Perhaps using a development board with a proven CAN interface would be a good start. But without a special CAN digital storage scope, it may be a battle to get it going on your own.

I've been tempted to try it. But at home I don't have the test gear.

Once you have mastered a PCB with CAN, you may never want to use RS232 or 485 again.

CAN bus protocols are not plug and play; every manufacturer can do their own. I was thinking it's rare to find auto gear from different suppliers that can be swapped.

Hats off to you guys. Hope you get the wheels spinning soon.

Ken
7C

[edit - added email notification]

Last edited by 7circle on Fri, 07 Jan 2011, 05:19, edited 1 time in total.

7circle wrote: If you could share the protocol info that goes between the TC-Charger CAN box and the TC charger, it would be very helpful to others interested in using serial with their Elcon / TC charger.

Well, it's hard to be sure yet since I can't get the charger to respond sensibly without a battery to charge.

This is what I believe:

The charger sends these 12 bytes once every second: 18 FF 50 E5 VVVV CCCC SS 00 00 00
where VVVV is the voltage in tenths of a volt, MSB first, e.g. 03 F0 = 100.8 V,
CCCC is the current in tenths of an amp, e.g. 00 0A = 1.0 A,
and SS is the status, as per the Lithiumate page, e.g. $10 = comms error (no CAN packet in 10 seconds).

Initially, this comes through as 18 FF 50 E5 00 0A 00 00 10 00 00 00,
meaning, as far as I can tell: "I'm sending 1 packet per second, 8 bytes, some code = $3F, another code = $50 [ edit3: these could just be part of the charger's extended address ], my ID is E5, present voltage is 1.0, present current is 0.0, status is comms error".

I believe all I need to send about once per second is 18 06 E5 F4 03 F0 00 0A 0X 00 00 00
which says "I'm the Lithiumate BMS that you are expecting, please dial up 100.8 V (hence 03 F0) @ 1.0 A (hence 00 0A) and turn yourself on". X = 0 for on, 1 to power off the charger.

These changes are to allow extended CAN addressing, as assumed by the Elcon charger (now TCcharger I believe).

[ Edit 8/Sep/2012: the above changes are undone in our code now, because we now talk to the charger with RS232, not CAN. We've changed can.address to can.identifier, to be more standard. But these changes may be useful to someone else, or to us again if we end up using extended IDs for some new CAN device. ]

Last edited by coulomb on Sat, 08 Sep 2012, 07:41, edited 1 time in total.

The "mystery" of the charger not responding to my RS232 "CAN" packets is solved. Early on, I had the idea that the interface between the charger and the can interface would be more-or-less RS232, so that +- 6 V signals would be used and allowable. As the subsequent circuit tracing revealed, this is not the case. So early on, some of my RS232 level connections to the charger would have blown the diode on a level shifting opto-coupler on the charger's control board. I became so sure of this that I opened the charger and checked. Near where the 7-pin connector comes in, there are two rectangular blobs (under the black coating), which turned out to be opto-couplers:

Sure enough, the lower of these two connected to pins 3 and 6 of the 7-pin connector, and there was no diode conductivity between them. I clipped off the damaged coupler, and soldered in a coupler liberated from an old BMU prototype. I verified that putting current through the opto's diode caused the voltage from the collector to emitter to change substantially, covered the new opto with natural cure silicone, and reassembled the charger.

Today we verified that sending the packets as detailed in the above post changes the charger's error signal from "comms error" to "no battery". Also, the messages sent by the charger don't have the comms error bit set, when suitable packets are sent to the charger through the repaired control board.

Emboldened by this success, we connected a 24 V block of cells to the charger (via a circuit breaker, in case the charger tried to send 410 V to the cells), and switched on the charger. The packets we were sending requested 28.8 V @ 2.0 A. Alas, the charger still responded with the "missing battery" signal. I guess that 26.5 V isn't enough to keep the charger happy. It's a pity; it would be nice to have a programmable charger capable of 5.5 A at anything from, say, 12 V to 410 V. Oh well; I guess we need to string together a few more cells.

At least we know that the charger is OK now. I was impressed by the build quality; it's certainly at least several cuts above the cheapest junk that can be found out there.

[ Edit: Pleasingly, the voltage in the packets from the charger changed to reflect the voltage of the battery, within about half a volt. Half a volt may seem like a large error compared to 26.5 volts, but of course it's about an eighth of a percent of 410 V. ]

Last edited by coulomb on Tue, 25 Jan 2011, 16:18, edited 1 time in total.

BTW, the 5-pin connector visible at the top of the control board (just to the left of where the 7-pin connector wires terminate) is very likely where the charger's firmware is updated from. There is a warning decal that covers two holes in the case; these two holes allow access to that connector and to the push-button.

One day it would be nice to figure out what processor they use, how the various "algorithms" are stored there, and be able to change the control software. I think that many people get caught out when changing their pack voltage slightly; you need to change the charger as well (and of course the rest of the system needs to be able to cope with the changes too, e.g. controller, perhaps a heater, and so on).

There are a few places in the world where chargers can be reprogrammed, but as far as I know there aren't any in Australia. It would be nice to be able to offer the service for a modest fee, at some time in the future (after the MX-5 is running, for example!).

Yes, I was thinking that, though the ones I'm familiar with are 4, 8, or 14 pins (MSP430).

I'm not wanting to discover industrial secrets or anything like that. I'm happy for Elcon (whatever they are called now) to sell the chargers, and even update the algorithms, if they can do it locally. Shipping a charger once from China is bad enough; shipping it back to China and back here again to adjust the voltage threshold is too much freight cost, delay, inconvenience, and risk in my view.

My thinking is that the processor under the JTAG connector (if that's what it is) is "just" for parameters, like algorithm, Ah capacity, that sort of thing. The charge algorithm proper is possibly handled by another processor, possibly to the left of the three mini-DIP opto couplers near the middle of the picture.

If so, that makes it simpler, and less risky. It was possibly even designed for the more technically inclined customers to DIY.

coulomb wrote: BTW, the 5-pin connector visible at the top of the control board (just to the left of where the 7-pin connector wires terminate) is very likely where the charger's firmware is updated from. There is a warning decal that covers two holes in the case; these two holes allow access to that connector and to the push-button.

I've just realised that this push-button is how algorithms are selected from a set of up to 10 pre-installed algorithms on the non-CAN version of the charger. This button will likely do nothing on this CAN-enabled model, and likely has nothing to do with flashing a new algorithm.

From the manual:

2. To choose another curve, please cut off the power supply first, then uncover the label, pressing the button while connecting the power. If you want to choose curve 3, release the button after the 3rd LED Flash. Now the selected curve (e.g. curve 3) will be recorded in memory. If you want the charger to work with the selected curve (e.g. curve 3), cut off the power and reconnect it.

coulomb wrote: I was impressed by the build quality; it's certainly at least several cuts above the cheapest junk that can be found out there.

Glad to hear this, as I bought mine primarily on the premise that you guys had bought one. While this is not the way I prefer to do things, I figure it is better to follow those who know something than for someone who doesn't have a clue to take a totally different guess.

(Note: you may have to be subscribed to DIYelectriccar to access their attachments).

These algorithms don't apply to our charger, since ours is a CAN model; we have to tell the charger what voltage and current to use over the CAN bus (or through the serial port, as we will be doing, but the charger can't tell the difference). However, presumably the same voltage limits would apply, or close to them. Note that V1 in all the algorithms is 2.0 VPC. Any less than that, and it's not even ready for stage one.

But the next question is how many cells does our nominal 312 V charger expect? 312 is evenly divisible by 3.0 and 3.25, but not by 3.2 or 3.3. We suspected 3.0 at first, since it lines up so nicely with lead acid, and there are lead acid versions of this charger. But that would mean 312/3 = 104 cells, so the maximum average voltage per cell would be 416 V (the charger's limit) divided by 104 = 4.0 VPC. But algorithm 606 requires 4.10 VPC, and some of the 300-series algorithms require up to 4.30 VPC. So we believe that they are using 3.25 V as the nominal LiFePO4 voltage, giving 312/3.25 = 96 cells. That allows for 416/96 = 4.33 VPC, which fits nicely. So the likely minimum pack voltage is 96 * 2.0 = 192 V.

That means that we should be able to charge all the cells in our largest box (60 cells, could fit as many as 64 later) as long as the average cell voltage is at least 192/60 = 3.20 VPC. So that box should do nicely for testing the charger.
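The arithmetic above can be re-run as a quick sanity check (the 312 V / 416 V / 2.0 VPC figures come from the posts; the 3.25 V, 96-cell interpretation is our inference, not vendor-published data):

```python
# Cell-count arithmetic for the nominal 312 V charger.  The 3.25 VPC /
# 96-cell reading is an inference from the algorithm voltage limits.

NOMINAL_PACK_V = 312.0   # charger's nominal output voltage
MAX_PACK_V = 416.0       # charger's absolute maximum output voltage
V1 = 2.0                 # minimum per-cell voltage before stage one starts

def cells_for(nominal_vpc):
    """Cell count and worst-case max VPC if the charger assumes nominal_vpc."""
    cells = NOMINAL_PACK_V / nominal_vpc
    return cells, MAX_PACK_V / cells

# 3.0 VPC gives 104 cells but only 4.00 VPC maximum, too low for the
# 4.10+ VPC lithium algorithms; 3.25 VPC gives 96 cells and 4.33 VPC.
cells, max_vpc = cells_for(3.25)

min_pack_v = 96 * V1          # 192 V likely minimum pack voltage
vpc_needed_60 = min_pack_v / 60   # 3.2 VPC needed from the 60-cell box
```

This is just the post's reasoning in executable form; if the charger's real cell-count assumption differs, the conclusions shift accordingly.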

From the "When BMS go bad" post, talking about a BMU that discharged a cell, potentially damaging it:

coulomb wrote: ... looks like this cell got discharged somehow. Most likely, whatever went wrong to stop the chain of comms all those months ago, caused the software to go crazy and drained that cell.

We hope to fix this by inserting a diode or two (using a dual diode, with one diode shorted for now) in series with the gate of the bypass transistor. Yet another component on the digital board. But I think an analogue board would have needed the same.

The idea is to make the BMU "over-discharge fail-safe". If the software goes crazy and discharges the cell, then when the cell reaches a certain voltage (perhaps around 1.8 Volts Per Cell = VPC [ Edit: was 2.6; the processor runs at 2.5 V so nothing will change above 2.5 VPC ]), the diode drop causes the voltage applied to the MOSFET to drop below its gate threshold voltage. At that point, either the transistor will turn off, or more likely it will go into linear mode and get very hot. We'll need to do some experiments on the actual board to see what will happen. If the transistor fails shorted, then we've achieved nothing except melting the transistor. However, if the transistor survives the high temperature, or fails open circuit, it should eventually turn off, and save the cell from draining below a certain voltage.

It could be a delicate balancing act. We want to make sure that at 3.6+ VPC, the transistor turns on hard and reliably, but turns off somewhere above 2.0 VPC. The gate threshold on the transistors we're using is specified as 0.45 to 0.85 V at 25 °C. (Datasheet link is broken; data here: http://www.alldatasheet.com/datasheet-p ... 12BDS.html.) The datasheet doesn't say what happens to the threshold voltage as the temperature increases. [Edit: actually, the second graph indicates that the threshold decreases with increasing temperature. ]

The other transistor we've been using has a threshold of 0.5 to 1.5 V (a rather wide range), but the minimum threshold voltage is given as 0.35 V at 150 °C, suggesting that the threshold will decrease with temperature. That means that as soon as the transistor starts increasing its resistance due to the threshold being reached, it will heat up, and the threshold will decrease, so the transistor will keep conducting until the temperature gets very high. Eventually, even the lowest threshold will not be met, so if it survives, the transistor will eventually let go when the cell voltage goes low enough, but in the meantime, it could get very hot and fail shorted.

The diode is next to the transistor, and it will likely have a negative temperature coefficient, so it won't help either. [Edit: oops, it's next to the FAN transistor, not the bypass transistor; the two are effectively thermally isolated. So that's something. ]

It would be nice to find a simple, foolproof way to make the BMUs discharge fail-safe.
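The balancing act can be sketched numerically (the 0.6 V per diode drop is an assumption for illustration; the threshold range is the 0.45 to 0.85 V figure from the datasheet mentioned above):

```python
# Rough sketch of the fail-safe idea: gate voltage = cell voltage minus the
# series diode drops, compared against the MOSFET gate threshold range.
# DIODE_DROP is an assumed silicon forward drop, for illustration only.

DIODE_DROP = 0.6                # V per diode (assumption)
VTH_MIN, VTH_MAX = 0.45, 0.85   # V, datasheet threshold range at 25 degC

def gate_state(cell_v, n_diodes):
    """Worst-case behaviour of the bypass transistor for a given cell voltage."""
    vgs = cell_v - n_diodes * DIODE_DROP
    if vgs >= VTH_MAX:
        return "on"         # on even with the highest-threshold device
    if vgs <= VTH_MIN:
        return "off"        # off even with the lowest-threshold device
    return "linear?"        # may conduct partially and run hot
```

With these numbers, two diodes leave a 2.0 V cell in the uncertain "linear?" region (gate at 0.8 V), while three diodes guarantee turn-off at 2.0 V yet still turn on hard at 3.6 V, which is one way to read the "three might even be better" remark.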

Edit: introduce the subject.

Edit 2: Weber pointed out in email that the transistor can likely handle the ~ 280 mW of power; the bigger challenge will be to have the circuit work reliably at 2.5 VPC, and disconnect somewhere around 2.0 VPC, certainly above 1.0 VPC, with a 0.3 (?) to 0.85 V threshold variation.

Edit 3: as long as we never use the high threshold device (1.5 V), we obviously need two diodes. Three might even be better.

Edit 4: fixed a link

Last edited by coulomb on Wed, 06 Apr 2011, 09:41, edited 1 time in total.