17:37:15 <pjb> limiting yourself to 64-bit is not sane, it's insane and restrictive. IPv6 addresses are 128-bit, and in crypto, we often need even bigger numbers…

17:37:32 <elderK> pjb: nil signals that the byte order is set dynamically, rather than hardcoded into the integer type itself. If byte-order is nil, the effective byte-order is determined by the current value of the *default-byte-order* variable.
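elderK's nil-means-dynamic rule can be sketched in a few lines. Only *default-byte-order* is named in the discussion; the helper function and the :little-endian default are assumptions for illustration:

```lisp
;; Sketch of the dynamic byte-order rule elderK describes.
;; EFFECTIVE-BYTE-ORDER and the :LITTLE-ENDIAN default are assumptions;
;; only *DEFAULT-BYTE-ORDER* is named in the discussion.
(defvar *default-byte-order* :little-endian
  "Byte order used when a type's byte-order slot is NIL.")

(defun effective-byte-order (byte-order)
  "Return BYTE-ORDER itself, or the current default when it is NIL."
  (or byte-order *default-byte-order*))
```

Because *default-byte-order* is a special variable, the default can be rebound dynamically, e.g. `(let ((*default-byte-order* :big-endian)) (effective-byte-order nil))`.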

17:40:22 <pjb> so, yes, % is used for "dangerous" functions, not just non-exported functions. Normal functions such as convert-from, convert-to, range, etc don't need it.

17:40:57 <elderK> pjb: Even though they are not intended to be called by clients?

17:41:18 <pjb> The "intended to be called by clients" is indicated by exporting the symbol.

17:43:02 <elderK> What is the convention for things that, say, are not meant to be used by "general users" but can be used by "developers"? Like, someone who's just using the library vs. those extending it.

17:51:33 <pjb> write-into has an implicit "parameter" which is the size of the integer written. It's specified by the type parameter. But since it takes an offset parameter, it would be better to return the size (or the new offset), rather than the value that has been written.
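pjb's suggestion (return the new offset so writes can be threaded) might look like this; the exact signature and the little-endian layout are assumptions, not elderK's actual API:

```lisp
;; Hypothetical WRITE-INTO following pjb's advice: return the new
;; offset rather than the value written. Signature and byte order
;; are assumptions for illustration.
(defun write-into (buffer offset value size)
  "Write VALUE into BUFFER as SIZE octets, little-endian.
Returns the new offset so successive writes can be chained."
  (loop for i below size
        do (setf (aref buffer (+ offset i))
                 (ldb (byte 8 (* 8 i)) value)))
  (+ offset size))
```

Returning the offset lets callers thread it through successive writes: `(write-into buf (write-into buf 0 #x1234 2) #xAB 1)`.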

17:56:37 <sukaeto> I sometimes put a . in front of symbols that name things that would otherwise be exported (e.g. slots whose accessor is named by the same symbol as the slot would be, if I didn't stick the . in the slot name)

17:57:05 <pjb> Let's take an example. You implement a hash-table. There's a vector of buckets, and a function that finds the index of the right bucket. Not all buckets are filled with valid data. So if you use aref on the vector of buckets randomly, you could find invalid data and get bad results or behavior. Then you could have an accessor (%bucket ht index) that would let you use all the slots, but normal code should use (bucket-for-key ht key).
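A minimal sketch of that hash-table example. Only %bucket and bucket-for-key come from pjb's description; the struct layout and the sxhash-based indexing are assumptions:

```lisp
;; %BUCKET is the "dangerous" raw accessor, BUCKET-FOR-KEY the safe,
;; exported entry point. The HT struct and SXHASH indexing are assumed.
(defstruct ht
  (buckets (make-array 16 :initial-element nil)))

(defun %bucket (ht index)
  "Raw access: INDEX may name an empty or invalid bucket."
  (aref (ht-buckets ht) index))

(defun bucket-for-key (ht key)
  "Safe access: always hashes KEY to a valid bucket index."
  (%bucket ht (mod (sxhash key) (length (ht-buckets ht)))))
```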

18:00:27 <pjb> For internal stuff, don't hesitate. If your shadowing symbol is exported, consider how your package will be used. If it's used with CL (or the package owning the symbol you shadowed), then you may consider defining a version of the CL package with your symbols instead, otherwise the user will have to use (:shadowing-import-from :your-package :s1 :s2 :s3 …)
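The clash pjb describes, sketched with hypothetical package names (binio and binio-client are made up):

```lisp
;; Hypothetical packages illustrating pjb's point. BINIO shadows two
;; CL symbols; a client that uses both CL and BINIO must resolve the
;; conflict with :SHADOWING-IMPORT-FROM.
(defpackage :binio
  (:use :cl)
  (:shadow :read :write)
  (:export :read :write))

(defpackage :binio-client
  (:use :cl :binio)
  ;; Without this line, DEFPACKAGE signals a name-conflict error
  ;; for READ and WRITE:
  (:shadowing-import-from :binio :read :write))
```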

18:00:43 <sukaeto> elderK: well, on the one hand, I don't want to use something that has an understood meaning for a different meaning. On the other hand, I've never seen anyone else use a . prefix in Lisp - I kind of stole it from UNIX's hidden file model. So people may read that code and be like "wtf was this guy thinking?"

18:01:24 <elderK> sukaeto: I always associated % with "private or internal" and ! as "BEWARE"

18:01:25 <pjb> sukaeto: . is often used in examples showing how to implement standard operators, to avoid shadowing (or in scheme where no such shadowing exists).

18:04:49 <elderK> pjb: Other than the issues you have raised, are there any other major things?

18:06:27 <pjb> So, for structure, you are basically re-implementing a simple MOP. Not bad. But eventually, perhaps you will learn about CLOS and the MOP, and then you will be able to implement your binary structures directly as a CLOS metaclass. This would give a better integration with CL.

18:07:36 <elderK> pjb: I spent several days studying the MOP and I have completed reading the book Jachy suggested. The main reason I avoided using the MOP here was simply because it seemed simpler not to.

18:11:47 <elderK> pjb: I was more meaning like... CL defines classes for say, integer and cons and stuff. I was wondering if I could do the same, so that people could specialize on "primitive" binary types like unsigned integers and stuff.

18:12:41 <elderK> pjb: Thank you for taking the time to critique my work. :)

18:13:14 <sukaeto> elderK: and you may be right about that! The only reason I spoke up is because I'm pretty sure I'm doing it wrong (or at least, what I'm doing is less than ideal), and I wanted to see what other people in the channel would say.

18:13:18 <elderK> Jachy: I found it a very interesting read. I especially enjoyed the contrasting of various OO approaches.

18:14:16 <elderK> sukaeto: I was curious if there was a standard convention for "I wanna call this variable SOMETHING but I can't, because it'd shadow something else I need. So, I need to name it differently..."

18:14:19 <pjb> Well, you use defclass to define some classes. So there are already type designators of the same name to designate their type. The question would be if your classes weren't CLOS classes (or CL structures). How would you write (deftype foo () ?) You can always use satisfies, I guess.
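A minimal sketch of the satisfies route pjb mentions, for a representation that is neither a CLOS class nor a structure. The tagged-list descriptor format here is an assumption:

```lisp
;; A type designator via DEFTYPE + SATISFIES for a plain-data
;; representation. The (:U16 <value>) tagged-list format is made up.
(defun u16-descriptor-p (object)
  "True when OBJECT looks like (:U16 <integer 0..65535>)."
  (and (consp object)
       (eq (first object) :u16)
       (typep (second object) '(unsigned-byte 16))))

(deftype u16-descriptor ()
  '(satisfies u16-descriptor-p))
```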

18:14:31 <sukaeto> how does the old joke go? There are two hard problems in programming: cache invalidation, naming, and off-by-one errors.

18:15:37 <elderK> pjb: I was more asking about how I could integrate my own "primitive types" just as things like "integer" and "list" are with CLOS. Like, so you can specialize on something that is not a class instance, but a primitive.

18:18:49 <elderK> It would be nice if I could do the binary-parsing stuff without parsing integers and things byte by byte. But even so, doing it byte by byte avoids potential alignment issues if I were to use, say, CFFI to read integers...

18:18:55 <elderK> And it also allows me to support a larger variety of potential sizes.
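Byte-by-byte decoding as elderK describes imposes no alignment constraints and supports any width, including the 128-bit case from earlier. A hypothetical little-endian decoder:

```lisp
;; Hypothetical byte-by-byte decoder: no alignment requirement,
;; arbitrary width (2 octets or 16 octets alike).
(defun decode-unsigned (bytes offset size)
  "Read a SIZE-octet little-endian unsigned integer from BYTES."
  (loop with value = 0
        for i below size
        do (setf (ldb (byte 8 (* 8 i)) value)
                 (aref bytes (+ offset i)))
        finally (return value)))
```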

18:43:03 <pjb> The question is when you want to do the conversion between lisp objects and binary objects.

18:44:02 <elderK> Well, the idea I was going on was that "binary types" were just... restricted or otherwise limited "normal Lisp types." The actual serialization of values is done as late as possible.

18:44:36 <pjb> In lisp you can manipulate the descriptor, and convert the value on-demand when needed. Or you can convert into a DOM (to keep the ontology of the binary type in lisp). Or you can convert into lisp objects (but then it's more complicated to make the correspondence between the two type ontologies; you may have to add descriptors, or you may have some ambiguities).

18:49:16 <pjb> Well, if the boxing can store directly into the binary structure, it can still be efficient. But it depends on the operations you do. If you need to perform algorithmic operations on this data, it's better to keep it as pure lisp data, and to convert only for I/O.

18:50:43 <elderK> That, at least to me, makes it more important to avoid any unnecessary boxing and things.

18:50:51 <pjb> The other alternative is to have your own class, so you can subclass an integer class, but then you will need an accessor when you want to use the value in lisp, because there are funcallable objects, but not "arithmeticable" or "sequenceable" objects (though some implementations have extensible sequences).
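A sketch of that alternative: a class of your own wrapping the value. binary-u32, boxed-value, and binary-u32+ are hypothetical names; the point is that arithmetic has to go through the accessor:

```lisp
;; Wrapping the value in a class of your own, per pjb's alternative.
;; Lisp arithmetic cannot touch the instance directly, so every
;; operation unboxes via the accessor and re-boxes the result.
(defclass binary-u32 ()
  ((value :initarg :value :accessor boxed-value
          :type (unsigned-byte 32))))

(defun binary-u32+ (a b)
  "Add two boxed values, wrapping modulo 2^32."
  (make-instance 'binary-u32
                 :value (ldb (byte 32 0)
                             (+ (boxed-value a) (boxed-value b)))))
```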

18:50:59 <elderK> This would be used to, say, easily parse an inode structure.

18:51:58 <pjb> elderK: you may have a look at com.informatimago.common-lisp.heap.heap ; there, I implemented operations on the binary types to keep them in the binary store, instead of doing the operation in lisp.

18:53:19 <elderK> The idea I have in mind is that computation is done on the /deserialized/ values. So, the Lisp values. When you're operating on the stuff, you are not directly messing with the "binary" stuff. There's a very clear separation: you are always working on Lisp values. But you can convert binary to Lisp, and Lisp to binary.

18:54:18 <elderK> As for the cost of boxing, it really depends on how smart implementations are. If you have a class, say, with a "primitive" slot, you'd hope that would be stored directly in the class instance, if at all possible.

18:54:20 <pjb> Then you can easily copy those URLs from the (local) log file.

18:54:39 <elderK> No, it doesn't. But it's okay - I have bookmarked your GitHub in general.

18:55:44 <elderK> pjb: It really depends on how storage is handled. I guess you could, for instance, create a special metaclass solely for "primitives" and make it so they store their values in-line, if possible.

18:58:43 <pjb> You may also have a look at ASN.1. They have the same problem. Or even JSON.

18:58:51 <elderK> Problem is, you'd wind up having two translation steps. First, deserializing from whatever to the "object ontology" version of the thing. Then you'd have another step, where you "unboxed" everything you cared about.

18:59:32 <pjb> i.e. binary is nothing special or primitive; it's just a given format with a given ontology, and we have to convert between two different worlds, with some things in common and some quite different.

18:59:35 <Bike> elderK: the problem with storing values unboxed is that the compiler has to cooperate in order to avoid unnecessary boxing/unboxing operations around accessors. this means the compiler has to be aware of the types of slots and accessors when compiling calls to accessors. this makes redefinition of the slot difficult (in general requiring recompilation of all code that calls the accessor)

18:59:36 <elderK> I think for the time being, I will stick with the (write 'type ...) approach.

19:02:55 <elderK> so that you have a layer that is like (write 'type value) which really just does the boxing and stuff behind the scenes.

19:05:02 <elderK> I guess you could also define a standard protocol for "boxing" and "unboxing" things. Like, unwrap <ontological-instance> and wrap <ontological-type-name> value
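That protocol could be sketched with two generic functions; everything here (the names wrap/unwrap, the boxed-u8 class, the eql-specializer dispatch on the type name) is a hypothetical reading of elderK's idea:

```lisp
;; Hypothetical WRAP/UNWRAP protocol: WRAP takes an ontological type
;; name and a Lisp value, UNWRAP recovers the plain value.
(defgeneric wrap (type-name value)
  (:documentation "Box VALUE as an instance of binary TYPE-NAME."))

(defgeneric unwrap (instance)
  (:documentation "Return the plain Lisp value inside INSTANCE."))

(defclass boxed-u8 ()
  ((value :initarg :value :reader %value)))

;; Dispatch on the type name via an EQL specializer:
(defmethod wrap ((type-name (eql :u8)) value)
  (check-type value (unsigned-byte 8))
  (make-instance 'boxed-u8 :value value))

(defmethod unwrap ((instance boxed-u8))
  (%value instance))
```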

19:08:25 <elderK> I really do wonder how like, things like Genera and stuff /did/ do efficient binary IO and stuff. I mean, for processing network formats and all kinds of things, I imagine you'd want the mapping to be as simple and fast as possible.

19:08:37 <elderK> And cutting down on garbage would also be important too, I imagine.

19:08:47 <elderK> So much to think about :) You've given me lots of ideas to digest.

19:16:59 <pjb> I learned programming in the 70s, there were a lot of terminals and printers with only upper case then.

19:18:00 <pjb> Honestly, this makes a difference only when used in implementations with a "modern" mode implemented badly, i.e. with Allegro CL (mlisp instead of alisp). clisp also has a modern mode (per package), so it doesn't break.

20:06:24 <elderK> Xach: How long do you think it will be before package-local nicknames are universal? :D Alternatively, how important is it these days to really "support" implementations other than, say, SBCL or ECL?

20:32:22 <elderK> I have a question about the reader. Particularly with macro characters. The cyclic nature of the reader kind of hurts my brain.

20:32:55 <elderK> Like, let's say the reader... is reading. And the user has, like, set a macro character. Depending on the situation, the reader will invoke the function for that macro character to handle the read.

20:33:13 <elderK> But is the "read table" in use by that "macro character function" the same as the read table that was in effect at the time of reading?

20:33:35 <elderK> Or is the read-table in effect in the macro function, the one that was in effect when the macro function was set / defined?

20:33:48 <elderK> Or does it depend on whether it's an "interpreter", rather than a compiler?

20:34:33 <elderK> It kind of seems like, to "read", you need a functional implementation, because of reader macros...

20:44:41 <Bike> elderK: it just uses what's in *readtable*, so basically what's in effect at time of reading.

20:45:29 <White_Flame> a reader macro can either recursively call READ which uses the readtable & normal reader rules, or it can consume character by character and do whatever it wants locally

20:45:56 <White_Flame> obviously the latter is the terminal recursion case
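The two styles White_Flame describes can be sketched in a copied readtable so the global one is untouched; the chosen characters ([ and !) and the result forms are arbitrary:

```lisp
;; Two reader-macro styles, installed in a private readtable.
(defvar *rt* (copy-readtable))

;; Style 1: recursive -- [ ... ] reads its sub-forms with the normal
;; reader via READ-DELIMITED-LIST (which calls READ until #\]).
(set-macro-character
 #\[ (lambda (stream char)
       (declare (ignore char))
       (cons 'bracketed (read-delimited-list #\] stream t)))
 nil *rt*)
(set-macro-character #\] (get-macro-character #\)) nil *rt*)

;; Style 2: character by character -- ! consumes digits manually,
;; never calling READ (the "terminal recursion case").
(set-macro-character
 #\! (lambda (stream char)
       (declare (ignore char))
       (loop for c = (peek-char nil stream nil nil)
             while (and c (digit-char-p c))
             collect (read-char stream) into digits
             finally (return (parse-integer (coerce digits 'string)))))
 nil *rt*)
```

With *readtable* bound to *rt*, `(read-from-string "[1 2 3]")` yields `(BRACKETED 1 2 3)` and `(read-from-string "!42")` yields `42`.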

20:51:57 <White_Flame> pfdietz: sure, but that's not what deals with the input stream & dispatching on characters. It's Lisp code, it can do whatever it wants ;)

20:52:01 <pfdietz> I am somewhat ambivalent about customizing the reader. It makes it harder to write programs that grovel over general code.

20:52:28 <White_Flame> I have been noodling with concepts for a more purely declarative lex/parse

20:53:15 <elderK> I just wonder how such a... reader is implemented. Like, let's say we "compile" our stuff to some form we can more easily execute. Say, byte code or something. Or maybe we just, literally remember the AST and walk it to execute.

20:53:49 <elderK> So, we'd have to have a table containing a pointer to the function to be invoked to do the reading. Since that can be redefined, we have to be able to decide whether to call some built-in compiled function to do it, or run the "potentially interpreted" one.

21:03:20 <elderK> I'm just trying to... match things up. I mean, how can reading be completely independent from... stuff? It can't just be a "typical lexer/scanner", because macro characters require us to invoke some function that is potentially set by the user.

21:03:47 <elderK> so, in an interpreter, that would require us to then interpret that macro function to continue the read.

21:11:48 <no-defun-allowed> For example, the UNIX-Haters Handbook compares it to the contextual grammar of C++, calling Lisp's parser "recursive descent, meaning you can write it on a piece of paper", from memory.

21:13:00 <elderK> Bike: Right. My main problem isn't with the fact that, lexing/parsing is basically "all in one." I've seen lexerless parsers.

21:13:20 <elderK> My problem is with the fact that the stuff that handles lexing is, well, dynamic.

21:13:36 <elderK> Then again, I'm used to kind of static table-based lexers and parsers and things.

21:14:10 <Bike> being "dynamic" pretty much just means the table is an object in memory rather than an off-line thing.

21:14:55 <elderK> Yeah, I get that. I'm just thinking of the necessity to have some way to actually read code in the first place, so that the user's code that sets a new reader macro-function can be understood and patched in :P

21:15:36 <Bike> when you start sbcl or whatever, it doesn't read in all the code again or anything.

21:16:34 <elderK> Bike: Which is to say, when you start the Lisp implementation - say from the standard image - everything is in place to read and understand standard Lisp. If it reads stuff, evaluates stuff, that changes that, well, okay, because it started off with the base syntax.

21:25:51 <elderK> I also need to learn how to do ... like, usual lexing in Lisp. I'm used to implementing lexers in C using transition tables and stuff. Basically state[current_state][input_category] is the next state.
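That C idiom translates directly to a Lisp 2D array. The table, states, and categories below are made up for illustration (a toy recognizer for digit runs):

```lisp
;; Table-driven lexing in Lisp, mirroring the C-style
;; state[current_state][input_category]. The states and categories
;; here are a made-up toy: 1 = "all digits so far", 2 = "not a number".
(defconstant +digit+ 0)
(defconstant +other+ 1)

(defun category (char)
  (if (digit-char-p char) +digit+ +other+))

;; transitions[state][category] => next state
(defparameter *transitions*
  (make-array '(3 2) :initial-contents '((1 2)    ; 0: start
                                         (1 2)    ; 1: in number
                                         (2 2)))) ; 2: not a number

(defun final-state (string)
  "Run STRING through the transition table, returning the final state."
  (let ((state 0))
    (loop for char across string
          do (setf state (aref *transitions* state (category char))))
    state))
```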

21:46:41 <elderK> There's so much to learn - and I'm uncertain of the order :)

21:47:02 <elderK> I am fortunate to be in contact with so many knowledgeable people, though :)

22:37:25 <pjb> elderK: about *readtable* and reader macros, note that *readtable* is a special variable, therefore it has dynamic scope. lexical = WHERE, dynamic = WHEN. So the question is WHEN the reader macro is executed, what binding has the *readtable*. Of course, if the reader macro function is called thru the *readtable*, by reading the macro character, at that time, the *readtable* is bound to the readtable that contains the mapping of that macro character.

22:39:36 <pjb> Notice how the reader macro function read-objcl-expression binds *readtable* to *objc-readtable*.

22:40:43 <pjb> elderK: the syntax is [object messageWith: (foo) andWith: bar] where messageWith:andWith: must be read case sensitively amongst other things.

22:42:28 <pjb> elderK: but if the object is not a special identifier such as super, then it must be read as a lisp expression: [(if foo obj1 obj2) getIt] hence the other binding, to *lisp-readtable*.
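The rebinding pattern pjb describes can be sketched in a much simplified form; *bracket-readtable* stands in for *objc-readtable*, and the 'send result form is invented (the real read-objcl-expression handles far more syntax):

```lisp
;; Simplified sketch of a reader macro that rebinds *READTABLE*
;; around its recursive READ calls, as READ-OBJCL-EXPRESSION does.
(defvar *bracket-readtable*
  (let ((rt (copy-readtable)))
    ;; inside brackets, #\] terminates the list
    (set-macro-character #\] (get-macro-character #\)) nil rt)
    rt))

(defun read-bracket (stream char)
  (declare (ignore char))
  ;; Dynamic scope: every READ below, however deeply nested, sees
  ;; *BRACKET-READTABLE* until this binding unwinds.
  (let ((*readtable* *bracket-readtable*))
    (cons 'send (read-delimited-list #\] stream t))))
```

To use it, bind #\[ to read-bracket in whatever readtable is current, e.g. `(set-macro-character #\[ #'read-bracket nil some-readtable)`.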

22:43:56 <elderK> pjb: Of course, when the macro function is compiled, the syntax that is in effect... then, is set. Like, just because you change the readtable doesn't mean the macro function suddenly gets recompiled, right?

22:47:32 <pjb> elderK: And yes, you can also mutate the readtable bound to *readtable* itself while reading. I.e., you can write a reader macro function that will set macro-characters while reading.

22:47:52 <pjb> elderK: Probably a good track to follow for an obfuscated lisp contest…

22:48:56 <White_Flame> pjb: I think elderK means when the source code to the reader is compiled

22:48:58 <pjb> White_Flame: However, the syntax of LISP (1959) was very simple. Basically, a lexer to read symbols, strings and numbers, and a parser for the sexp syntax.

22:49:58 <pjb> White_Flame: elderK: yep, when the source code is compiled, it's the readtable that is bound to *readtable* in the compilation environment that is used. It can be the standard readtable, or something completely different. For example, if you write your reader macro in vacietis!

22:52:58 <pjb> White_Flame: and in CL, the lisp reader is clearly defined in two parts: the basic lisp reader algorithm specifies the lexer (which is a tad more complex than what you'd do in general with lex (but you could do it in lex, there are states)), and then it defines the standard reader macros, of which there are two kinds: macros such as #\" scan tokens such as strings, and macros such as #\( parse s-exps. Again, not much syntax here.