I have a .sql file that contains 1 million INSERT statements for a table. My table has 5 fields: 4 of them are VARCHAR2 and one is a NUMBER that is the primary key.
I am executing this file from SQL*Plus on Windows with the command below:

@filename.sql

The inserts are running, but they are taking a long time.

So my question is: is there any way to insert these records in batches, say in lots of 1000 records, so that the elapsed time is reduced?
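For context, the "lots of 1000" idea is usually implemented by committing every N rows instead of once per statement or once at the very end. A minimal sketch of that pattern, assuming a hypothetical table MY_TABLE with placeholder columns (the real script would use the actual table and data):

```sql
-- Sketch only: commit every 1000 rows inside a PL/SQL loop.
-- MY_TABLE, its columns, and the generated values are placeholders.
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    INSERT INTO my_table (id, c1, c2, c3, c4)
    VALUES (i, 'a', 'b', 'c', 'd');
    IF MOD(i, 1000) = 0 THEN
      COMMIT;  -- commit each batch of 1000
    END IF;
  END LOOP;
  COMMIT;      -- commit the final partial batch
END;
/
```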

If you have 1,000,000 INSERT statements, then you have 1,000,000 statements to parse and execute.
But maybe you can replace them with one INSERT statement that inserts 1,000,000 rows?
Otherwise, you can't reduce the elapsed time just by batching: all the rows have to be inserted either way.
The elapsed time mostly depends on concurrent activity on the database.
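A single statement that inserts many rows can look like the sketch below. The table name, columns, and the row generator are illustrative; in practice the SELECT would read from a staging or external table holding the real data:

```sql
-- Sketch: one INSERT ... SELECT instead of a million single-row INSERTs.
-- MY_TABLE and the generated values are placeholders.
INSERT INTO my_table (id, c1, c2, c3, c4)
SELECT level, 'a', 'b', 'c', 'd'
FROM   dual
CONNECT BY level <= 1000000;  -- row generator standing in for real data

COMMIT;
```

A single statement avoids the per-statement parse and execute overhead of a million separate INSERTs.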

UNDO_RETENTION specifies how many seconds undo information is kept. The default is 900 seconds, and you can set this parameter to guarantee that Oracle keeps undo logs for extended periods of time. In other words, it specifies how long undo information should be retained after commit, preventing "snapshot too old" errors on long-running queries, in addition to supporting Oracle Flashback Query. This could be a reason for your slow performance with many inserts.
Revert it to its original value after your insert operation.
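If you do experiment with UNDO_RETENTION, the parameter is changed and restored with ALTER SYSTEM (the values below are illustrative, not a recommendation; note this requires ALTER SYSTEM privilege):

```sql
-- Check the current setting first (SQL*Plus command).
SHOW PARAMETER undo_retention

ALTER SYSTEM SET UNDO_RETENTION = 60;   -- lower retention during the bulk load

-- ... run @filename.sql here ...

ALTER SYSTEM SET UNDO_RETENTION = 900;  -- restore the original value afterwards
```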