I bet he does what any half-smart programmer would do, which is put all of the using std directives into a header file, and then before he writes his programs he just types #include "directives.h" and voilà, he has everything he wanted (if you read my tutorial about the preprocessor you would know how to do this)

Nope, never used that - although it would be the most sensible way to go about it if I was going to skip the namespaces. The thing is, I'm not anti-using namespace because it requires extra lines etc., but because it defies the whole point of having namespaces in the first place. I use std::cout all the time and it always compiles and works as I expect.

Originally posted here by White_Eskimo
Yup pwarning you are right there...also you could do something by calling the flush() function in the iostream library that will flush the buffer for you...an example of this in code would look something like this:

Anyways, the only problem with using the flush() function built into your iostream library is that it doesn't skip a line like endl does

Of course, there's nothing stopping you from flushing the output explicitly using the flush() function - if that's all you want to do. However, if you're going to add a newline anyway, you might as well use std::endl. It's a matter of using the best tool for the job at hand.

I bet he does what any half-smart programmer would do, which is put all of the using std directives into a header file, and then before he writes his programs he just types #include "directives.h" and voilà, he has everything he wanted (if you read my tutorial about the preprocessor you would know how to do this)

"Half-smart programmer" should be the defining quote, because a smart programmer would not put all their std dependencies in an include file. The compile time of the project will increase significantly, because header files such as set, iostream, map and string are extremely large. The compiler will need to read and parse each file every time it is included and determine whether it is needed or not. Additionally, if your program includes the iostream header when the program does not make use of it, the size and initialization time of the program will also increase, because the cout, cin, and cerr objects need to be constructed. Furthermore, if you have a list of 'using std::' declarations and a particularly large project, you may end up making use of quite a few parts of the STL. This will make your include file resolve many of the names of the standard namespace, giving the same adverse effect as 'using namespace std', which even you yourself advocate against.

smart programmer would not put all their std dependencies in an include file. The compile time of the project will increase significantly because header files such as set, iostream, map and string are extremely large.

?? iostream.h is a file on your computer, and if you have ever gone into it or done anything with it you would notice that it is about 15000 lines of code, so it is pretty big. If I were to create a file called directives.h and add all of the std namespace's directives to it, I would have about 10 lines of code in directives.h. So you tell me: will an extra 10 lines of code really slow down your compile time? SURE!! By 10 milliseconds, max. Plus, any half-smart programmer would know that compile time doesn't matter at all; it is how fast the program will run, not how fast it will compile on your computer. Once you distribute your .exe, your clients won't have to compile anything in C++ because it is already in machine code.

Additionally, if your program includes the iostream header when the program does not make use of it, the size and initialization time of the program will also increase because the cout, cin, and cerr objects need to be constructed

you better hope that you are using the iostream header file in your code, or there is no point in using any of the std namespace's directives...

Furthermore, if you have a list of 'using std::' declarations and have a particularly large project you may end up making use of quite a few parts of the STL

Agreed. But hardcore C++ programmers who put their class declarations into a header file run into the same problem, right?

This will make your include file resolve many of the names of the standard namespace and will have the adverse effect of 'using namespace std', which even you yourself advocate against

well, if you have seen any of my code, I use the using namespace std declaration because I don't give a **** about compile time; it is run time that matters. I kinda see the stand you are taking, but my directives.h file is a lot shorter than my iostream header file, so it isn't quite the same thing, though I do believe my compile time will probably be increased by a couple of milliseconds. But then who cares? Unless you work in an open-source environment, people won't take your code and try to compile it; they will just get your executable and run that.

Anyways, why do I use the using namespace std; declaration and not all of the individual namespace directives? Because it is a lot safer form of programming: instead of a list of using std:: directives, the declaration has everything in it, and therefore I can never leave something out. Anyways... hope this post made some sense

Originally posted here by White_Eskimo
plus any half-smart programmer would know that compile time doesn't matter at all. It is how fast the program will run, not how fast it will compile on your computer. Once you distribute your .exe, your clients won't have to compile anything in C++ because it is already in machine code.

It is obvious that you haven't worked on any real projects before, because otherwise you would understand that compiling can take over a day to complete. The company I work at has strict standards on what can be included and resolved in a .cpp file and a .h file. On another note, run-time speed doesn't dictate how you should be programming either. Unless the constraints of the program specifically state that it must run with a certain time-efficiency, there is no need to be worrying about petty details like speed. If it runs fast enough that the user cannot notice, it is fast enough.

Anyways, why do I use the using namespace std; declaration and not all of the individual namespace directives? Because it is a lot safer form of programming: instead of a list of using std:: directives, the declaration has everything in it, and therefore I can never leave something out. Anyways... hope this post made some sense

No, it is not safer, because you run into the name-clashing problem that was so prominent in C and was thankfully fixed. I've noticed a few other posters mention quite a few times what can happen via using namespace std, so I'm not sure if I should make the effort of beating a dead horse with its own leg, but it seems that I will have to, because you are set in your ways.

The standard template library is quite enormous and was designed by a programmer, Alex Stepanov, who was years ahead of his time in program design. He understood that classes should only be used when the problem calls for behavior or state. He also knew that he should not gob together all the functions, classes, and variables into one gigantic class std, because C++ was so nice to give us the namespaces it lacked before. The std namespace keeps all those functions, classes, and variables inside its own namespace and prevents them from polluting the global namespace.

When you use 'using namespace std', it resolves all of those functions, variables, and objects into the global namespace. The consequences of name clashing will pop up in any medium to large scale project: you import a library (or even use your own) that has a common function, class, or variable name that is also in the STL, and the choice of which to use becomes ambiguous. This can lead to hard-to-track run-time errors where certain functions behave differently than expected, and it was a major reason that namespaces were introduced to the language.

Just my two cents on the whole std debate. While I agree it's better programming practice to use std::cout and such, I also know it's a hassle for many beginning programmers to have to worry about it. I teach some introduction to computer science courses at my campus, and we just started using the "using" statement last semester because the CS department felt that they should move toward the new standard all at once. In doing so, I was given the choice of how I wanted to teach the material. I decided to stick with "using namespace std" because, that way, the students don't need to know exactly which names live in the std namespace. Much like, at least for the first course, I tell them to just get in the habit of always putting #include<iostream>, which gets them in the habit of using the include statement. Not until the second semester do we go into details on what is included in each of the header files and which they need to include for what purposes. That's the reason I teach them to simply use "using namespace std" as well. It gets them in the habit of having the using statement at the top of their programs. Some of you may disagree with this method of teaching, but, given many of the students I've had in the class, it's amazing that some of them even get their programs to compile, let alone execute successfully.

That's a fine way of teaching it, and that is what I still use. I still don't understand the reason why you wouldn't use namespace std. I would love to know your in-depth reasoning behind this ("it's good programming practice" isn't what I'm looking for).

You shall no longer take things at second or third hand,
nor look through the eyes of the dead...You shall listen to all
sides and filter them for your self.
-Walt Whitman-

Originally posted here by avdven
That's the reason I teach them to simply use "using namespace std" as well. It gets them in the habit of having the using statement at the top of their programs. Some of you may disagree with this method of teaching, but, given many of the students I've had in the class, it's amazing that some of them even get their programs to compile, let alone execute successfully.

AJ

IMO the first thing you should be teaching your students is the rules of scope, and namespaces can fall quite easily into one of those lectures. Honestly, you can tell me students won't understand it, but that's hogwash. Check out Koenig & Moo; both are excellent C++ teachers and the authors of the now famous Accelerated C++ book. They both advocate learning how to properly resolve identifiers into scope from the beginning. Though this is just an opinion; teach your class the way you want, but it seems to me that Koenig & Moo have had a pretty successful career.

Originally posted here by Lansing_Banda That's a fine way of teaching it, and that is what I still use. I still don't understand the reason why you wouldn't use namespace std. I would love to know your in-depth reasoning behind this ("it's good programming practice" isn't what I'm looking for).

Many posters have already mentioned the pitfalls in fact I mentioned it myself in my last post.

It is obvious that you haven't worked on any real projects before, because otherwise you would understand that compiling can take over a day to complete.

Not all "real" projects take days to compile.

The company I work at has strict standards on what can be included and resolved in a .cpp file and a .h file.

Not everyone at AO works for the same company as you though, so we're not bound by their standards.

On another note, run-time speed doesn't dictate how you should be programming either.

I think you'll find that run-time speed does dictate how you should be programming. Remember Quake? Some of the most frequently-called routines in that program had to be written in assembly to make it run at an acceptable speed, even though the rest of the game was written in C. So run-time speed not only dictates how you should program, but what language you should use to do so.

Unless the constraints of the program specifically state that it must run with a certain time-efficiency

Since a lot of programs run in real-time (i.e. they have to respond to an event within a given time, whether that be 1 millisecond or three hours), the constraints of most programs will specify that they must run with a certain time-efficiency.

there is no need to be worrying about petty details like speed

Since when has speed ever been a petty detail in computing? If speed was so insignificant, why on earth would Intel and AMD be spending millions of pounds creating faster and faster processors (especially the new 64 bit range)? They'd only sell these products if consumers (particularly businesses and research institutes) wanted their programs to run faster. Admittedly, processor speed isn't the only factor in speed of execution, but it sure as hell makes a difference.

No, not all do, but if you go and work at any software house that produces software of any reasonable size, it will. I can't think of one project that I've worked on in the last four years that wasn't compiled overnight. This is just my experience, though, working on industrial-strength software.

Not everyone at AO works for the same company as you though, so we're not bound by their standards.

Did I say you were bound by my company's standards? The only reason I mentioned my company's standards is to give a real-life example of why you shouldn't include unused dependencies.

I think you'll find that run-time speed does dictate how you should be programming. Remember Quake? Some of the most frequently-called routines in that program had to be written in assembly to make it run at an acceptable speed, even though the rest of the game was written in C. So run-time speed not only dictates how you should program, but what language you should use to do so.

Sure I remember Quake. I also remember the days when there wasn't parallel processing and all graphics rendering was done in software on the CPU. Today it is different: with CPU speeds reaching 4 GHz and programmable processors on the video cards, games are reaching lightning speeds. Did you know that most games spend 90% of their time in the graphics card? That leaves the rest of the time for physics and AI, which in the past also had to share its time with graphics rendering. Because of this, games like the Crash Bandicoot series have come out that are programmed in an even higher-level language, Lisp. Lisp allowed the programmers to write code much faster and prototype new ideas, which allowed the game to get out on budget and on time. This is a feat that most games written in C++ cannot compete with. A majority of games shipped today use pre-written graphics, physics, and AI libraries. Their concerns are not with speed anymore; they are more worried about game design, gameplay, and meeting deadlines.

Of course there are exceptions to this: some programs are written to be as fast as possible, such as the implementations of those graphics, physics, and AI libraries, or embedded devices that trigger on events. But these projects are more rare than they are common; the typical project makes use of 3rd-party libraries that were written once and are already as fast as they need to be. It is very typical for new programmers to worry about the execution speed of a program before it is written. This should be left to the profiler, which will show what needs to execute quicker, and most of the time the fix is a more efficient algorithm, not inlined assembly.

In this era of computing, programmers' time and program stability have become more of a priority than speed in a majority of applications, even in games.