Yes, you can. You just need to copy the data in log.dir on disk to the new machine and keep the broker.id in the broker config the same. No need to change anything in ZK since the broker will re-register on startup. The main purpose of broker.id is to allow people to move data logically from one broker to another.
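For reference, the move described above can be sketched roughly like this (the hostname and log.dir path are made-up examples, and the broker on the old machine must be stopped before copying):

```shell
OLD_HOST=old-broker.example.com      # hypothetical old machine
LOG_DIR=/var/kafka/logs              # whatever log.dir points at (assumed path)

# 1. Copy the on-disk log data verbatim from the old machine:
rsync -a "$OLD_HOST:$LOG_DIR/" "$LOG_DIR/"

# 2. Keep broker.id identical in the new machine's broker config:
grep '^broker.id=' config/server.properties

# 3. Start the broker; it re-registers itself in ZK under the same
#    broker.id, so no manual ZooKeeper changes are needed:
bin/kafka-server-start.sh config/server.properties
```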

As Jun described, the purpose of broker.id is to be able to move data from one broker to the other without changes. I believe this should work in 0.8 as well. However, we've never tried it, so not sure if there are bugs. Let us know how it goes.

I've started by only copying $log.dir from server A to server B. Both server A and server B ran the same version of Kafka 0.8 with the same configuration files.

However, after running Kafka 0.8 on server B I get the following exception when I try to fetch the message:

[2013-02-28 05:56:35,851] WARN [KafkaApi-1] Error while responding to offset request (kafka.server.KafkaApis)
kafka.common.UnknownTopicOrPartitionException: Topic topic_general partition 0 doesn't exist on 1
        at kafka.server.ReplicaManager.getLeaderReplicaIfLocal(ReplicaManager.scala:163)
........

However, the folder topic_general-0 exists and the files 00000000000000000000.log and 00000000000000000000.index are there. There is also a replication-offset-checkpoint file in this $log.dir folder. I then copied both $log.dir and the zookeeper folder from server A to server B and ran it. In the zookeeper folder I have the following files:

-rw-r--r--. 1 root root      296 Feb 28 06:12 snapshot.0
-rw-r--r--. 1 root root 67108880 Feb 28 06:12 log.1
-rw-r--r--. 1 root root 67108880 Feb 28 06:12 log.4b
-rw-r--r--. 1 root root     4817 Feb 28 06:12 snapshot.4a
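For anyone comparing against their own setup, this is roughly how I'd sanity-check a copied $log.dir (the path is an assumed example; the file names are the ones from my listing above):

```shell
LOG_DIR=/var/kafka/logs              # assumed example for $log.dir

# The topic-partition directory and its segment files should be present:
ls "$LOG_DIR/topic_general-0"
# expect: 00000000000000000000.log  00000000000000000000.index

# The checkpoint file the broker writes alongside the topic directories:
cat "$LOG_DIR/replication-offset-checkpoint"
```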

I actually tried to load the data back with the same instance of Kafka on server A, so the broker id must be the same. The reason I brought this up in the first place is because we've had some issues recognizing the messages on a server stop/restart. I was able to reproduce our issue with the following steps:

Notice that kafka-server-stop.sh uses kill -SIGTERM and zookeeper-server-stop.sh uses kill -SIGINT. My observation is that on our server kill -SIGINT doesn't actually kill the zookeeper process (I can still see it running when I check the processes).
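To illustrate the difference with a generic sketch (not Kafka's actual scripts): a process that ignores SIGINT, as a detached daemon can, survives kill -SIGINT but still dies on SIGTERM.

```shell
# Simulate a daemon that ignores SIGINT:
sh -c 'trap "" INT; sleep 30' &
pid=$!
sleep 1

kill -INT "$pid"                  # the signal zookeeper-server-stop.sh sends
sleep 1
kill -0 "$pid" 2>/dev/null && echo "still running after SIGINT"

kill -TERM "$pid"                 # the signal kafka-server-stop.sh sends
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null || echo "gone after SIGTERM"
```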

Then when we tried to fetch the messages from existing topics and partitions, we get the following error:

WARN [KafkaApi-1] Error while responding to offset request (kafka.server.KafkaApis)
kafka.common.UnknownTopicOrPartitionException: Topic topic_general partition 0 doesn't exist on 1
        at kafka.server.ReplicaManager.getLeaderReplicaIfLocal(ReplicaManager.scala:163)

I am not sure if anyone has experienced this before. It appears to me that because kill -SIGINT didn't actually kill the previous zookeeper process, running from that state messes up the partition/topic information in zookeeper? And maybe because of that, copying the log files and trying to reload them won't work (because the information was somehow corrupted)?
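If it helps narrow this down, the partition and broker state can be inspected directly in ZooKeeper (the znode paths below are the 0.8 layout as I understand it; localhost:2181 is an assumed address):

```shell
ZK=localhost:2181                    # assumed ZooKeeper address
TOPIC=topic_general
PART=0

# Leader/ISR state Kafka keeps for the partition:
bin/zookeeper-shell.sh "$ZK" get "/brokers/topics/$TOPIC/partitions/$PART/state"

# The ephemeral registration for broker.id 1, which should exist
# while that broker is running:
bin/zookeeper-shell.sh "$ZK" get /brokers/ids/1
```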

Sure, I can collect the logs. However, the strange thing in my case is that the zookeeper-server-stop.sh script (kill -SIGINT) didn't actually kill the zookeeper process on my server. When you tried shutting down zookeeper in your steps, did you double-check to see whether the zookeeper process had been killed or not? (ps aux | grep "zookeeper")
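For what it's worth, pgrep is a slightly less noisy way to run the check in parentheses above, since it avoids matching the grep command itself:

```shell
# Check whether a ZooKeeper JVM is still alive after the stop script:
if pgrep -f zookeeper > /dev/null; then
  echo "zookeeper still running"
else
  echo "zookeeper stopped"
fi
```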