It's a bit hard for other people to give you ideas without more of a hint about what you want to do.

What sort of Artificial Intelligence are you interested in?

Localization seems to be a popular subject, and quite a few people have done Monte Carlo Localization projects. A harder project would be Simultaneous Localization and Mapping (SLAM), but that is quite a challenge on the NXT.
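To make the Monte Carlo Localization idea concrete, here is a minimal sketch in plain Java (no leJOS calls). It assumes a 1-D corridor with a wall at a known position and a range sensor; the wall position and noise values are illustrative assumptions, not tuned numbers:

```java
import java.util.Random;

// Minimal 1-D Monte Carlo Localization sketch (assumed map and sensor model).
// The robot moves along a line; a range sensor measures distance to a wall.
class MclSketch {
    static final double WALL = 200.0;          // wall position in cm (assumed)
    static final Random rng = new Random(42);  // seeded for repeatability

    // Move every particle by 'dist' plus Gaussian motion noise.
    static double[] motionUpdate(double[] particles, double dist) {
        double[] out = new double[particles.length];
        for (int i = 0; i < particles.length; i++)
            out[i] = particles[i] + dist + rng.nextGaussian() * 2.0;
        return out;
    }

    // Weight particles by how well the predicted range matches the measurement.
    static double[] weights(double[] particles, double measured) {
        double[] w = new double[particles.length];
        double sigma = 5.0;                    // sensor noise std-dev (assumed)
        for (int i = 0; i < particles.length; i++) {
            double d = (WALL - particles[i]) - measured;
            w[i] = Math.exp(-d * d / (2 * sigma * sigma));
        }
        return w;
    }

    // Systematic resampling: draw particles with probability proportional to weight.
    static double[] resample(double[] particles, double[] w) {
        double total = 0;
        for (double x : w) total += x;
        double[] out = new double[particles.length];
        double step = total / particles.length;
        double u = rng.nextDouble() * step, c = w[0];
        int i = 0;
        for (int j = 0; j < particles.length; j++) {
            while (u > c && i < w.length - 1) c += w[++i];
            out[j] = particles[i];
            u += step;
        }
        return out;
    }

    // Position estimate: mean of the particle set.
    static double estimate(double[] particles) {
        double s = 0;
        for (double p : particles) s += p;
        return s / particles.length;
    }
}
```

On a real NXT the motion update would come from the tachometers and the measurement from the ultrasonic sensor, but the filter loop stays the same.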

Puzzle-solving robots are quite popular. Rubik's Cube solvers have been built several times, and there is a recent example of one that solves Sudoku. There are other puzzles you could attempt.

Game-playing robots, e.g. one that can play chess or checkers, are possibilities. Again, there are lots of other possible games.

Many complex projects need components running on both the PC and the NXT, and communication between them.

Computer vision and speech control projects are possible. Vision needs a wireless camera, or a camera such as the Mindsensors NXTCam.

There are possible projects in machine learning. For example, simulating learning in insects or other simple animals.

I have not seen much done with robot arms and kinematics. That is an interesting area.

Walking robots with a realistic walking action is another possibility.

Balancing robots are popular. You can use multiple sensors and probabilistic robotics techniques such as Kalman filters for these.

You could look at prototyping types of robots that might be commercial successes in the near future, like a better vacuum-cleaning robot, or a grass-cutting robot. (You would probably need to teach that one the Three Laws of Robotics.)

I was thinking along the lines of robots mimicking human hands, legs and so on, and/or the ability to learn from human movements and replicate them: emotion, facial expressions, teaching the robot hand gestures, etc.

Do you only have access to one NXT? I am finishing off my thesis in a few weeks' time; I worked on building a balancing robot, implementing a behavioural system, and master/slave control of two robots. Towards the end of my project I started looking into joint decision making between two robots: the robots share sensor information, and one of them makes a decision based on the combined data. This leads the way to exploring the group dynamics of robots, each having its own specific role within the group. For example, a robot with a compass sensor could be the navigator of the group, while one with an ultrasonic sensor or IRSeeker could be the scout.

You would probably need at least two NXTs to explore this effectively, though, and ideally three.
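The joint-decision step itself can be very simple once the Bluetooth plumbing is in place. A hypothetical sketch of the decision rule, with made-up message fields and a deliberately naive policy (steer toward whichever robot reports the most free space):

```java
// Hypothetical sketch of a joint decision between two robots.
// The master receives the slave's reading over Bluetooth and picks
// a heading; fields and the decision rule are assumptions.
class GroupDecision {
    // Each robot reports the heading (degrees) of the clearest direction
    // it can see and the free range (cm) in that direction.
    static int chooseHeading(int myHeading, int myRange,
                             int peerHeading, int peerRange) {
        // Naive rule: go where there is more free space.
        return (myRange >= peerRange) ? myHeading : peerHeading;
    }
}
```

In a real system the interesting work is in the message protocol and in richer decision rules (voting, role-based arbitration), but this shows where the shared sensor data ends up.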

I was thinking along the lines of robots mimicking human hands, legs and so on, and/or the ability to learn from human movements and replicate them: emotion, facial expressions, teaching the robot hand gestures, etc.

Some of these are quite challenging. Brian Bagnall's book (Maximum LEGO NXT: Building Robots with Java Brains) has a chapter on Hands and Exoskeletons. It uses a data glove to capture gestures. You could certainly use this for hand gestures.

You could build a robot face. It needs quite a few motors - possibly servo motors - to show any realistic facial expressions. Capturing facial expressions is quite tricky however. I don't know if there is any computer vision software that could be run on a PC to do this. If there was, you could then send commands to the NXT face robot to replay the facial expression.

To capture leg movements, you would probably have to use wireless sensors, unless you used sensors such as rotation sensors, the tachometers in the NXT motors, gyro sensors, or acceleration sensors on long cables.

I have just seen a segment on The Gadget Show (Channel Five in the UK) about shoes with a variety of sensors controlling an MP3 player. I think you can watch this on the Internet. You could do an NXT version of this: mount an NXT on a shoe with a variety of sensors, such as a touch sensor and an acceleration (tilt) sensor, and use it to control anything over Bluetooth with your foot.
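The control mapping for something like that is just a small decision function from sensor readings to commands. A sketch in plain Java, where the command names and tilt thresholds are made up for illustration:

```java
// Sketch of mapping shoe-mounted sensor readings to player commands.
// Command names and the 20-degree thresholds are assumptions.
class FootController {
    enum Command { NONE, PLAY_PAUSE, VOLUME_UP, VOLUME_DOWN }

    // touchPressed: touch sensor under the heel; tilt: fore/aft tilt in degrees.
    static Command decode(boolean touchPressed, int tilt) {
        if (touchPressed) return Command.PLAY_PAUSE; // heel tap toggles playback
        if (tilt > 20)    return Command.VOLUME_UP;  // toes up
        if (tilt < -20)   return Command.VOLUME_DOWN; // toes down
        return Command.NONE;
    }
}
```

The NXT would poll the sensors in a loop, run each reading through a function like this, and send the resulting command over Bluetooth; you would also want some debouncing so one tap does not fire repeatedly.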