From darren at DarrenDuncan.net Fri Nov 3 23:11:13 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Fri, 3 Nov 2006 23:11:13 -0800
Subject: [VPM] LAST MINUTE ANNOUNCE: Sat, Nov 4th is November RCSS meeting!
Message-ID:
It appears that those of you not on the reccompsci at googlegroups.com
list may not yet have heard of this, since the announcement may not
have been forwarded yet. So if you actually are interested, sorry
for the last-minute notice.
--------------
Date: Mon, 23 Oct 2006 01:02:10 -0700
From: "Peter van Hardenberg"
To: reccompsci at googlegroups.com
Subject: [reccompsci] November RCSS meeting!
Reply-To: reccompsci at googlegroups.com
Sender: reccompsci at googlegroups.com
Mailing-List: list reccompsci at googlegroups.com;
contact reccompsci-owner at googlegroups.com
List-Id:
List-Post:
List-Help:
List-Unsubscribe: ,
Sirs and Madams,
the hour approaches once again. The RCSS meeting for November will be held on
**SATURDAY**, November 4th at 7:00PM
UVic ECS 660 (top floor conference room)
This shocking deviation from the Tuesday norm will allow our guest
speaker Peter Andrews to attend from Vancouver where he toils
endlessly for Blue Castle Games. The schedule stands as follows
(subject to amendments if I've forgotten anyone or mucked up the
descriptions):
The Schedule
----
* Fresh Research: Raytracing developments -- ray bundles and ray
caching. With modern graphics cards capable of pushing millions of
polygons per second, it is becoming increasingly common that scenes
are composed of sub-pixel polys. Ray-tracing algorithms scale better
with scene complexity than polygon-based algorithms. Some pundits are
predicting the future will be rendered one pixel at a time. (Ryan
Nordman)
* FYI: Roll Your Own Relational Database. Many applications can
benefit from the principles of relational databases, even when a
full-blown RDBMS such as Oracle would be inappropriate. As part of a
two-talk series partnered with the Victoria Perl Mongers, Darren
Duncan will give an introduction to the theory behind a relational
database. Part two (at this month's Victoria.pm)
will focus on implementation, so when your appetite is whetted by part
one, the followup will satisfy.
* Guest Speaker: Frustum Culling of Axis-Aligned Bounding Boxes.
Efficient scene rendering is often more about what you throw away
than what you keep. In an industry where 16.5ms is all you get (and
you have to share it with those hopeless AI guys) every cycle counts.
Peter Andrews, former President of the UVic Games Club, visits from
Blue Castle Games, and has also offered to share his experience with
finding employment in the entertainment software industry.
What's coming up?
----
December's guest speaker will be UVic's own Dr. George Tzanetakis.
George is a pioneer in the field of Music Information Retrieval, or
computer listening. His research includes algorithms which can
categorize music into genres, or recognize different recordings of
the same song.
January's guest speaker will be Dr. Ulrike Stege. Ulrike's research
is in the field of Parameterized Complexity, and she will provide an
introduction to the field by explaining its application to solving
Minesweeper algorithmically. Unfortunately, she has not promised a
solution to the problem of what you will do with all the time you
have left over as a result.
See you all at the meeting on SATURDAY,
-pvh
--
Peter van Hardenberg
Victoria, BC, Canada
From Peter at PSDT.com Tue Nov 7 10:24:10 2006
From: Peter at PSDT.com (Peter Scott)
Date: Tue, 07 Nov 2006 10:24:10 -0800
Subject: [VPM] Nov 21 meeting
Message-ID: <6.2.3.4.2.20061107102018.02714980@mail.webquarry.com>
Anyone got any proposals for the November meeting?
I keep thinking that there are some people who want to hear basic
information instead of bleeding edge stuff. We're here for those
people as well. If anyone would be interested in any of the following,
please let me know which one(s):
Introduction to hashes
Introduction to regular expressions
A tour of array operations in Perl
How to construct Perl subroutine libraries
Introduction to object-oriented programming in Perl
Basic CGI programming in Perl
How to find and install CPAN modules
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com/
http://www.perlmedic.com/
From darren at DarrenDuncan.net Wed Nov 8 00:49:09 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Wed, 8 Nov 2006 00:49:09 -0800
Subject: [VPM] Nov 21 meeting
In-Reply-To: <6.2.3.4.2.20061107102018.02714980@mail.webquarry.com>
References: <6.2.3.4.2.20061107102018.02714980@mail.webquarry.com>
Message-ID:
At 10:24 AM -0800 11/7/06, Peter Scott wrote:
>Anyone got any proposals for the November meeting?
I think it has already been widely disseminated that I will be
talking about my Perl 5 database project, QDRDBMS, featuring a
walkthrough of whatever working code I have so far. I touched on an
introduction at the Nov 4th RCSS meeting, but the Nov 21st VPM
meeting was to be the "show me the code" meat.
Note that I will need to use someone else's laptop since I currently
lack my own. I will email the list sufficiently in advance of the
meeting with notices about what ought to be downloaded from where.
Actually, I'll just tell you now.
See: http://darrenduncan.net/QDRDBMS/
That contains the newest versions of the code files that actually do
something, even if they are in progress. In particular, Value.pm is
the most interesting right now; it implements a bunch of strong data
types, of both the scalar and collection variety, and all type
conversion is explicit; this does form the foundation for anything
else.
So just download whatever you find in that folder as late as you can
before the meeting, but I will endeavour to not change it on the day
of the meeting itself.
Presuming my code actually executes (I will endeavour to make the
minor genealogy app that I previously demo'd over SQLite work over
it), it has no external dependencies beyond what is bundled with Perl
5.8.1+ itself.
>I keep thinking that there are some people who want to hear basic
>information instead of bleeding edge stuff. We're here for those
>people as well. If anyone would be interested in any of the following,
>please let me know which one(s):
>
>Introduction to hashes
>Introduction to regular expressions
>A tour of array operations in Perl
>How to construct Perl subroutine libraries
>Introduction to object-oriented programming in Perl
>Basic CGI programming in Perl
>How to find and install CPAN modules
If we want to talk about other things too, I can share the time with
other speakers. But otherwise, I think I could easily take up the 2
hours.
-- Darren Duncan
From Peter at PSDT.com Wed Nov 8 17:28:37 2006
From: Peter at PSDT.com (Peter Scott)
Date: Wed, 08 Nov 2006 17:28:37 -0800
Subject: [VPM] Nov 21 meeting
In-Reply-To:
References: <6.2.3.4.2.20061107102018.02714980@mail.webquarry.com>
Message-ID: <6.2.3.4.2.20061108172748.0253d208@mail.webquarry.com>
At 12:49 AM 11/8/2006, Darren Duncan wrote:
>If we want to talk about other things too, I can share the time with
>other speakers. But otherwise, I think I could easily take up the 2
>hours.
We had a request for coverage of arrays and hashes. I'll defer those
to a later meeting. January by the look of it (figure we won't have a
meeting December 19).
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com/
http://www.perlmedic.com/
From jeremygwa at hotmail.com Thu Nov 9 20:34:26 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Thu, 09 Nov 2006 20:34:26 -0800
Subject: [VPM] - data structures, performance and memory
Message-ID:
hi all perl gurus,
I am working on two projects where I can use some of your advice.
project 1: I am volunteering for a non-profit organization putting a printed
"information directory" on their website. It will consist of a search
engine and an index of categories, where one can narrow down and browse
if they do not wish to do a search. Due to funding and other restrictions,
they cannot host an SQL database, XML, or web services, as they use Shaw
hosting, so I will be doing this project using a flat-file approach. Should
I use many small flat files, or one big one? Should I make a filesystem
directory for each category? Or how can I associate keywords with the data
to be searched? What can I do to make this search engine as fast and
efficient as possible?
project 2: I am working on a project where I need to store data in memory.
What are the memory requirements of certain structures, e.g. hash, array,
array of anon hashes, array of packed data, array of strings, array of
objects, etc.?
Suppose I have a hundred records (structures) holding, e.g., age, name,
address, etc. What is the best way of storing this data live in memory for
a long period of time, for performance and efficiency?
An array of anon hashes is pretty, but may not be the best for performance
and memory usage... am I right? Considering this, how can I store records
with elements that can be of different record types, saving as much memory
as possible? Can I do this with pack and unpack? I am not familiar with
this, but do you think it's more efficient than an array of
element-delimited strings? Or what if each record is an instantiated object
with properties?
Thanks in advance for your advice and help.
-Jeremy A.
_________________________________________________________________
Ready for the world's first international mobile film festival celebrating
the creative potential of today's youth? Check out Mobile Jam Fest for your
a chance to WIN $10,000! www.mobilejamfest.com
From darren at DarrenDuncan.net Fri Nov 10 02:11:40 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Fri, 10 Nov 2006 02:11:40 -0800
Subject: [VPM] - data structures, performance and memory
In-Reply-To:
References:
Message-ID:
At 8:34 PM -0800 11/9/06, Jer A wrote:
>hi all perl gurus,
>I am working on two projects where I can use some of your advice.
I'll see what I can do here.
>project 1: I am volunteering for a non-profit organization putting a printed
>"information directory" on their website. It will consist of a search
>engine, and an Index of categories, where one can narrow down, and browse,
>if they do not wish to do a search. Due, to funding and other restrictions,
>they cannot host an sql database, xml or webservices, as they use SHAW
>hosting, So I will be doing this project using a flat file approach.
The simplest solution I see, if your content is static and
infrequently changes, and you are staying with Shaw, is to just
pre-generate the index and content pages as static HTML and just dump
it on the server. Any code will run on your own machine, where you
can use a database. You can then use a specially crafted Google
link, one containing "site:" in the query, to just let Google provide
the search mechanism for your site. Lots of small (and larger) sites
use Google to do their search engine for them.
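For example, the pre-generation step can be a script run entirely on your
own machine; the file names, record format, and categories below are
invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: build one static HTML page per category from flat-file records.
# Each record here is "category<TAB>name<TAB>phone"; all data is made up.
my @lines = (
    "Food Banks\tDowntown Pantry\t555-0100",
    "Housing\tShelter Connect\t555-0101",
    "Food Banks\tMustard Seed\t555-0102",
);

my %by_cat;
for my $line (@lines) {
    my ($cat, $name, $phone) = split /\t/, $line;
    push @{ $by_cat{$cat} }, "<li>$name - $phone</li>";
}

my %pages;    # filename => HTML, ready to upload to the static host
for my $cat (sort keys %by_cat) {
    (my $file = lc $cat) =~ s/\W+/_/g;    # "Food Banks" -> "food_banks"
    $pages{"$file.html"} = "<html><body><h1>$cat</h1>\n<ul>\n"
        . join("\n", @{ $by_cat{$cat} }) . "\n</ul>\n</body></html>\n";
}

print "$_\n" for sort keys %pages;
```

Rerun the script whenever the directory data changes and upload the output;
nothing dynamic ever runs on the Shaw side.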
If your content is changed regularly and/or is interactive, I suggest
getting a different host. Shaw costs at least $20-50 per month,
AFAIK, and a dedicated web host with databases and such can be had
for as low as $10 per month, and you get your own domain name with
those too.
If neither of these is suitable for some reason, then please give
more information about how much data you have, what kind of data it
is, and about how much traffic you get, plus what features Shaw does
give you.
>project 2: I am working on a project where I need to store data in memory.
>What are the memory requirements of certain structures eg. hash,array,array
>of anon hashes,array of packed data,array of strings, array of objects etc.
>
>suppose I have a hundred records (structures) holding eg. age,name,address
>etc. what is the best way of storing this data live in memory for a long
>period time, for performance and efficiency.
>An array of anon hashes is pretty, but may not be the best for performance,
>and mem usage...am i right? Considering this, how can I store records with
>elements that can be of different record types, saving the most amount of
>memory as possible......can I do this with pack and unpack, I am not
>familiar with this, but do you think s more efficient, than an array of
>element delimited strings? or what if each record is an instantiated object
>with properties?
>
>Thanks in advance for your advice and help.
>
> -Jeremy A.
I think you may be getting into a "premature optimization" matter.
If you have only a hundred records of eg age,name,address, those will
all take up maybe 1 kilobyte at most of memory, which is negligible.
Don't bother fussing about memory unless you're storing tens of
thousands or more of such records, or you're working on an embedded
system. Just do whatever is easiest to program, as programmer
efficiency is usually more important than any other kind. Use hashes
if you're looking data up directly, such as by the person's name, and
arrays only when you plan to go through them sequentially. Or you
can include multiple references to the same data structures if you
need to.
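For instance (names and fields invented):

```perl
use strict;
use warnings;

# A hash keyed by name gives direct lookup; iterating the keys gives the
# sequential pass. Same data serves both access patterns.
my %by_name = (
    Alice => { age => 30, address => '12 Oak St' },
    Bob   => { age => 41, address => '7 Elm Ave' },
);

print "Alice is $by_name{Alice}{age}\n";      # direct lookup by key
for my $name (sort keys %by_name) {           # sequential walk
    print "$name lives at $by_name{$name}{address}\n";
}
```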
-- Darren Duncan
From eric at dmcontact.com Fri Nov 10 09:28:20 2006
From: eric at dmcontact.com (Eric Frazier)
Date: Fri, 10 Nov 2006 09:28:20 -0800
Subject: [VPM] - data structures, performance and memory
In-Reply-To:
References:
Message-ID: <6.1.1.1.2.20061110090259.0501dff8@mail.dmcontact.com>
Hi Jeremy,
Kind of a philosophy thing maybe, but I would not tend to think it is a bad
idea to optimize things as much as you can, especially considering you will
be running this script on a shared hosting server. And I kind of have this
thing about answering the question someone asks, not telling them it is the
wrong thing to ask.
So, with the above in mind, assuming that what Jeremy wants to do is
correct: what would be the best way to go? Sure, there isn't that much
data, but it doesn't take much to get expensive in that shared environment.
This looks interesting, but we go back to that shared server thing:
http://www.danga.com/memcached/
So what I would wonder about is: would there be any benefit to using
Storable, or compressing the data first using something like
Compress::Zlib::memGzip?
Which again might be an issue on something like Shaw... I have winged
installing Perl modules over FTP by compiling them on another machine first.
So then, would Storable be of some use? Does it end up being more efficient
in size? I would think so from what I have seen.
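A quick sketch of the Storable round-trip in question (Storable ships with
Perl 5.8; the records below are invented); the frozen string is what you
would hand to something like Compress::Zlib::memGzip if compression turned
out to help:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);    # core module as of Perl 5.8

# Serialize a record set to one compact binary string, then rebuild it.
my $records = [
    { name => 'Jeremy', age => 29 },
    { name => 'Darren', age => 35 },
];

my $frozen = freeze($records);   # one flat binary string
my $copy   = thaw($frozen);      # deep copy of the original structure

print 'frozen size: ', length($frozen), " bytes\n";
print "first record: $copy->[0]{name}\n";
```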
This also might be worth looking at, as I didn't know this about Perl 5.8:
Tie::Handle::ToMemory. Since that wasn't possible before 5.8, you could
look at how this author did it.
The only other question I would have is: why do you need in-memory
storage? Especially if this is being accessed through a CGI, you are
screwed right there. That bit I find a little confusing.
Eric
At 08:34 PM 09/11/2006, Jer A wrote:
>hi all perl gurus,
>
>I am working on two projects where I can use some of your advice.
>
>project 1: I am volunteering for a non-profit organization putting a printed
>"information directory" on their website. It will consist of a search
>engine, and an Index of categories, where one can narrow down, and browse,
>if they do not wish to do a search. Due, to funding and other restrictions,
>they cannot host an sql database, xml or webservices, as they use SHAW
>hosting, So I will be doing this project using a flat file approach. should
>I use many small flat files, or one big one? should I make a filesystem
>directory for each catagory? or how can i associate keywords with the data
>to be searched? What can i do to make this search engine as fast and
>efficient as possible?
>
>project 2: I am working on a project where I need to store data in memory.
>What are the memory requirements of certain structures eg. hash,array,array
>of anon hashes,array of packed data,array of strings, array of objects etc.
>
>suppose I have a hundred records (structures) holding eg. age,name,address
>etc. what is the best way of storing this data live in memory for a long
>period time, for performance and efficiency.
>An array of anon hashes is pretty, but may not be the best for performance,
>and mem usage...am i right? Considering this, how can I store records with
>elements that can be of different record types, saving the most amount of
>memory as possible......can I do this with pack and unpack, I am not
>familiar with this, but do you think s more efficient, than an array of
>element delimited strings? or what if each record is an instantiated object
>with properties?
>
>Thanks in advance for your advice and help.
>
> -Jeremy A.
>
>_________________________________________________________________
>Ready for the world's first international mobile film festival celebrating
>the creative potential of today's youth? Check out Mobile Jam Fest for your
>a chance to WIN $10,000! www.mobilejamfest.com
>
>_______________________________________________
>Victoria-pm mailing list
>Victoria-pm at pm.org
>http://mail.pm.org/mailman/listinfo/victoria-pm
From jeremygwa at hotmail.com Fri Nov 10 17:30:02 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Fri, 10 Nov 2006 17:30:02 -0800
Subject: [VPM] - data structures, performance and memory
Message-ID:
Darren,
Thank-you for your reply.
the "hundred" records example was not a good one; I mean very large records
put together in a very large array... (in the thousands).
What are the memory sizes for scalars, hashes, single-dim arrays, double-dim
arrays, arrays of anon hashes, etc.? How can I use as little memory as
possible, and how can I search these very large arrays efficiently?
Some pointers would be great; I don't need any examples.
Are elements of an array of a variant data type? Does this general type
consume more memory than if the type is explicitly defined? If so, how can
I explicitly define the type, e.g. int, string, bool, in Perl terms?
Can operations on very large arrays eat up more memory in execution?
How can I control Perl's allocation of memory?
I am using ActiveState Perl 5.8 on a Win32 host.
Thanks in advance for your help.
-Jeremy A.
> >project 2: I am working on a project where I need to store data in
>memory.
> >What are the memory requirements of certain structures eg.
>hash,array,array
> >of anon hashes,array of packed data,array of strings, array of objects
>etc.
> >
> >suppose I have a hundred records (structures) holding eg.
>age,name,address
> >etc. what is the best way of storing this data live in memory for a long
> >period time, for performance and efficiency.
> >An array of anon hashes is pretty, but may not be the best for
>performance,
> >and mem usage...am i right? Considering this, how can I store records
>with
> >elements that can be of different record types, saving the most amount of
> >memory as possible......can I do this with pack and unpack, I am not
> >familiar with this, but do you think s more efficient, than an array of
> >element delimited strings? or what if each record is an instantiated
>object
> >with properties?
> >
> >Thanks in advance for your advice and help.
> >
> > -Jeremy A.
>
>I think you may be getting into a "premature optimization" matter.
>If you have only a hundred records of eg age,name,address, those will
>all take up maybe 1 kilobyte at most of memory, which is negligible.
>Don't bother fussing about memory unless you're storing tens of
>thousands or more of such records, or you're working on an embedded
>system. Just do whatever is easiest to program, as programmer
>efficiency is usually more important than any other kind. Use hashes
>if you're looking data up directly, such as by the person's name, and
>arrays only when you plan to go through them sequentially. Or you
>can include multiple references to the same data structures if you
>need to.
>
>-- Darren Duncan
>_______________________________________________
>Victoria-pm mailing list
>Victoria-pm at pm.org
>http://mail.pm.org/mailman/listinfo/victoria-pm
From jeremygwa at hotmail.com Fri Nov 10 17:35:05 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Fri, 10 Nov 2006 17:35:05 -0800
Subject: [VPM] - data structures, performance and memory
In-Reply-To: <6.1.1.1.2.20061110090259.0501dff8@mail.dmcontact.com>
Message-ID:
hello Eric,
Thanks for your reply.
>The only other question I would have is why are you needing in memory
>storage? Esp if this is being accessed through a CGI, you are screwed right
>there. That bit I find a little confusing.
the memory storage is for project 2; the CGI involves project 1.
Projects 1 and 2 are totally unrelated.
-Jeremy A.
>From: Eric Frazier
>To: "Jer A" ,victoria-pm at pm.org
>Subject: Re: [VPM] - data structures, performance and memory
>Date: Fri, 10 Nov 2006 09:28:20 -0800
>
>Hi Jeremy,
>
>Kind of a philosophy thing maybe, but I would not tend to think it is bad
>idea to optimize things as much as you can. Esp considering you will be
>running this script on a shared hosting server. And I kind of have this
>thing about answer the question someone asks, not telling them it is the
>wrong thing to ask.
>
>So assuming that what Jeremy wants to do is correct with the above in mind.
>What would be the best way to go? Sure there isn't that much data, but it
>doesn't take much to get expensive in that shared environment.
>
>This looks interesting, but we go back to that shared server thing:
>http://www.danga.com/memcached/
>
>So what I would wonder about is would there be any benefit to using
>storeable, or compressing the data first using something like
>Compress::Zlib::memGzip
>
>Which again might be an issue on something like Shaw.. I have winged
>installing perl modules by FTP by compiling them on another machine first
>
>So then would Storeable be of some use? Does it end up being more efficient
>in size? I would think so from what I have seen.
>
>This also might be worth looking at as I didn't know this about perl 5.8
>Tie::Handle::ToMemory Since that wasn't possible since before 5.8 you could
>look at how this guy did it..
>
>The only other question I would have is why are you needing in memory
>storage? Esp if this is being accessed through a CGI, you are screwed right
>there. That bit I find a little confusing.
>
>
>Eric
>
>
>
>
>At 08:34 PM 09/11/2006, Jer A wrote:
>>hi all perl gurus,
>>
>>I am working on two projects where I can use some of your advice.
>>
>>project 1: I am volunteering for a non-profit organization putting a
>>printed
>>"information directory" on their website. It will consist of a search
>>engine, and an Index of categories, where one can narrow down, and browse,
>>if they do not wish to do a search. Due, to funding and other
>>restrictions,
>>they cannot host an sql database, xml or webservices, as they use SHAW
>>hosting, So I will be doing this project using a flat file approach.
>>should
>>I use many small flat files, or one big one? should I make a filesystem
>>directory for each catagory? or how can i associate keywords with the data
>>to be searched? What can i do to make this search engine as fast and
>>efficient as possible?
>>
>>project 2: I am working on a project where I need to store data in memory.
>>What are the memory requirements of certain structures eg.
>>hash,array,array
>>of anon hashes,array of packed data,array of strings, array of objects
>>etc.
>>
>>suppose I have a hundred records (structures) holding eg. age,name,address
>>etc. what is the best way of storing this data live in memory for a long
>>period time, for performance and efficiency.
>>An array of anon hashes is pretty, but may not be the best for
>>performance,
>>and mem usage...am i right? Considering this, how can I store records with
>>elements that can be of different record types, saving the most amount of
>>memory as possible......can I do this with pack and unpack, I am not
>>familiar with this, but do you think s more efficient, than an array of
>>element delimited strings? or what if each record is an instantiated
>>object
>>with properties?
>>
>>Thanks in advance for your advice and help.
>>
>> -Jeremy A.
>>
>>_________________________________________________________________
>>Ready for the world's first international mobile film festival celebrating
>>the creative potential of today's youth? Check out Mobile Jam Fest for
>>your
>>a chance to WIN $10,000! www.mobilejamfest.com
>>
>>_______________________________________________
>>Victoria-pm mailing list
>>Victoria-pm at pm.org
>>http://mail.pm.org/mailman/listinfo/victoria-pm
>
_________________________________________________________________
Find a local pizza place, music store, museum and more?then map the best
route! Check out Live Local today! http://local.live.com/?mkt=en-ca/
From jeremygwa at hotmail.com Sat Nov 11 08:38:53 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Sat, 11 Nov 2006 08:38:53 -0800
Subject: [VPM] regular-expressions and variables
Message-ID:
hello all perl gurus,
I have another problem - a regular expression problem.
How do I use scalar variables in substitution and complex matching?
e.g. I want the following to work:
$string =~ s/^$variable//;
$string =~ m/^([^$variable]*)/;
thanks in advance for your help.
-Jeremy A.
_________________________________________________________________
Say hello to the next generation of Search. Live Search ? try it now.
http://www.live.com/?mkt=en-ca
From darren at DarrenDuncan.net Sun Nov 12 20:03:19 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Sun, 12 Nov 2006 20:03:19 -0800
Subject: [VPM] regular-expressions and variables
In-Reply-To:
References:
Message-ID:
At 8:38 AM -0800 11/11/06, Jer A wrote:
>hello all perl gurus,
>
>I have another problem - a regular expression problem
>
>how do i use scalar variables in substitution and complex matching?
>
>eg I want the following to work.
>
>$string =~ s/^$variable//;
>
>$string =~ m/^([^$variable]*)/;
>
>thanks in advance for your help.
>
>-Jeremy A.
As far as I can tell, your examples already work. Observe:
Last login: Sun Nov 12 17:12:50 on ttyp1
Welcome to Darwin!
darren-duncans-power-mac-g4:~ darrenduncan$ perl
my $foo = 'abc';
my $bar = 'ab';
$foo =~ s/^$bar//;
print "foo is '$foo'\n";
foo is 'c'
darren-duncans-power-mac-g4:~ darrenduncan$ perl
my $foo = 'abc';
my $bar = 'b';
$foo =~ m/^([^$bar]*)/;
print "have $foo, matched is '$1'\n";
have abc, matched is 'a'
darren-duncans-power-mac-g4:~ darrenduncan$
That example is running under Perl 5.8.6, which is fairly recent.
What problem are you having?
-- Darren Duncan
From darren at DarrenDuncan.net Sun Nov 12 20:23:15 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Sun, 12 Nov 2006 20:23:15 -0800
Subject: [VPM] - data structures, performance and memory
In-Reply-To:
References:
Message-ID:
At 5:30 PM -0800 11/10/06, Jer A wrote:
>the "hundred" records example was not a good one, I mean very large records
>put together in a very large array.....(in the thousands).
>what are the memory sizes for scalars,hashes,single-dim arrays,double-dim
>arrays, array of anon-hashes -etc. how can i use as little memory as
>possible, and how can i search these very large arrays efficiently.
>
>Some pointers would be great, I don't need any examples.
Generally speaking, each Hash or Array used, whether anonymous or
not, uses more memory for overhead and per element than simple
scalars do.
AFAIK, a scalar variable uses about 20 bytes of memory overhead plus
its actual data; a hash or array is maybe 50-100 bytes of per-variable
overhead plus maybe 20-50 per element; a lot of the latter are
guesses. I do know that hashes use more memory than arrays for the
same number of value elements, maybe about 30% more overhead memory.
I would guess that, to save memory, using a single-dimensional array
or hash rather than a 2-dimensional structure built from others will
help, and that 2 parallel single-dimension arrays or hashes will save
memory compared to 1 array or hash of 2-element arrays or hashes.
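The pack/unpack idea from the original question fits here too, since a
packed record costs one scalar's overhead instead of a hash's per-key
overhead. A sketch with made-up field widths:

```perl
use strict;
use warnings;

# Fixed-width packed records: one flat string per record.
# Template: 20-byte name, 40-byte address, 16-bit unsigned age.
my $fmt = 'A20 A40 n';

my @records;
push @records, pack($fmt, 'Jeremy A', '123 Example St', 29);
push @records, pack($fmt, 'Darren D', '456 Sample Ave', 35);

# 'A' fields come back with trailing spaces stripped.
my ($name, $addr, $age) = unpack $fmt, $records[0];
print "record 0: $name / $addr / $age\n";
print 'bytes per record: ', length($records[0]), "\n";    # 62
```

The trade-off is that every field access pays an unpack, so this only wins
when memory, not CPU, is the scarce resource.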
>are elements on a array, of variant data-type...does this general type
>consume more memory than if the type is explicitly defined, if so, how can
>I explicitly define the type, eg. int,string,bool in perl terms etc.
I don't know if it is possible to specify strong types like
int/string/bool in Perl 5 without getting into Perl's internals or
using some third party module. (You can in Perl 6, but not that that
can help you now.) I've heard such a feature may have been added to
Perl 5 starting with 5.8+ or something, but if so then it is obscure
or I don't know where to look.
>can operations on very large arrays eat up more memory, in execution?
>how can I control perl's allocation of memory.
A foreach loop that iterates an array may be faster than a map which
produces a new array from an existing one ... or not.
But certainly, if you're processing a file, you want to read it one
line at a time rather than slurp it. But that's easy.
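Line-at-a-time reading keeps only one line in memory at once; the
in-memory filehandle below (itself a Perl 5.8 feature) just stands in for
a real file so the sketch is self-contained:

```perl
use strict;
use warnings;

# Open a filehandle onto an in-memory string (Perl 5.8+), then read it
# line by line exactly as you would a disk file.
my $text = "alpha\nbeta\ngamma\n";
open my $fh, '<', \$text or die "open: $!";

my $count = 0;
while (my $line = <$fh>) {    # only one line held at a time
    $count++;                 # ...process $line here...
}
close $fh;

print "$count lines\n";
```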
-- Darren Duncan
From kencl at shaw.ca Mon Nov 13 04:09:55 2006
From: kencl at shaw.ca (Ken Clarke)
Date: Mon, 13 Nov 2006 04:09:55 -0800
Subject: [VPM] regular-expressions and variables
References:
Message-ID: <003601c7071c$a1c1a650$1000a8c0@kens>
Just keep in mind that the contents of $variable are interpreted as a
regex pattern; $variable can contain any valid regex pattern. If all
you want to do is match exact character sequences, quote the variable,
i.e. s/^\Q$variable\E//;
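A quick illustration of that quoting (strings invented):

```perl
use strict;
use warnings;

# \Q...\E makes the variable's contents match literally: the '.' in
# $variable would otherwise match any character.
my $string   = 'a.c-rest';
my $variable = 'a.c';

$string =~ s/^\Q$variable\E//;
print "after strip: '$string'\n";    # after strip: '-rest'

# Unquoted, the same pattern also matches 'axc', which may surprise you.
my $loose = 'axc-rest';
print 'unquoted match on axc-rest: ',
      ($loose =~ /^$variable/ ? 'yes' : 'no'), "\n";
```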
You'll take a slight performance hit, since perl will recompile the pattern
every time it is used unless you use the /o (only compile pattern once)
modifier, as it has no way of knowing if the contents have changed or not.
What I usually do if I'm going to use a pattern multiple times within a
loop, and I know that the block containing the loop may be called multiple
times with different patterns, is compile the pattern then use the compiled
pattern within the loop itself.
EG
sub check_list_for_pattern {
    my ($pattern, $list_ref) = @_;
    my $compiled_pattern = qr/$pattern/;
    for (@{$list_ref}) {
        if (/$compiled_pattern/) {
            # handle match found
        } else {
            # handle no match
        }
    }
}
hth
>> Ken Clarke
>> Contract Web Programmer / E-commerce Technologist
>> www.PerlProgrammer.net
----- Original Message -----
From: "Jer A"
To:
Sent: Saturday, November 11, 2006 8:38 AM
Subject: [VPM] regular-expressions and variables
> hello all perl gurus,
>
> I have another problem - a regular expression problem
>
> how do i use scalar variables in substitution and complex matching?
>
> eg I want the following to work.
>
> $string =~ s/^$variable//;
>
> $string =~ m/^([^$variable]*)/;
>
>
> thanks in advance for your help.
>
> -Jeremy A.
>
> _________________________________________________________________
> Say hello to the next generation of Search. Live Search - try it now.
> http://www.live.com/?mkt=en-ca
>
>
--------------------------------------------------------------------------------
> _______________________________________________
> Victoria-pm mailing list
> Victoria-pm at pm.org
> http://mail.pm.org/mailman/listinfo/victoria-pm
>
From jeremygwa at hotmail.com Mon Nov 13 14:45:58 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Mon, 13 Nov 2006 14:45:58 -0800
Subject: [VPM] inline c and weird buggy-ness
Message-ID:
hi there perl gurus.
I am creating a module in Inline C. I want to concatenate a very large
string fast. I am getting weird crashes on each run. Also, it appears that
my string is not being concatenated at all.
I am using VC 6.0 and Perl 5.8 on Win32 XP.
How can I get this to work?
your help is appreciated. Thanks.
-Jer A.
---------------------------------------------------------------------
#!Perl
package DSTRING;
my $code = <<'END_OF_C';
...
END_OF_C
Inline->bind(C => $code);
1;
my $data = "";
my $count = 0;
for(1...100000)
{
print $count,"\n";
$data = DSTRING::addString($data,"testdata");
$count++;
}
print $data;
print "END\n";
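For what it's worth, the concatenation itself needn't leave Perl at all: an in-place append is already fast, because Perl over-allocates string buffers as they grow. A minimal pure-Perl sketch (my own, not Jer's Inline C module):

```perl
use strict;
use warnings;

# In-place append with .= grows one buffer incrementally; assigning the
# result of a helper ($data = addString($data, ...)) instead copies the
# whole string on every iteration, which is quadratic overall.
my $data = "";
$data .= "testdata" for 1 .. 100000;
print length($data), "\n";    # 800000
```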
_________________________________________________________________
Ready for the world's first international mobile film festival celebrating
the creative potential of today's youth? Check out Mobile Jam Fest for
your chance to WIN $10,000! www.mobilejamfest.com
From jeremygwa at hotmail.com Mon Nov 13 17:18:45 2006
From: jeremygwa at hotmail.com (Jer A)
Date: Mon, 13 Nov 2006 17:18:45 -0800
Subject: [VPM] override perl-types
Message-ID:
hi all,
I have an idea, and I wonder if it can be done using just Perl code.
I would like to override a datatype, so that when that type is initialized,
acted upon, or manipulated, I can customize the code and store the data in a
different way, e.g. in memory or whatever.
E.g. suppose I want @arr to actually be a concatenated string, but act as if
it is an array.
I hope you understand what I am getting at.
I am using Perl 5.8.
Your help is appreciated. Thanks a bunch.
-Jeremy A.
_________________________________________________________________
Find a local pizza place, music store, museum and more?then map the best
route! Check out Live Local today! http://local.live.com/?mkt=en-ca/
From darren at DarrenDuncan.net Mon Nov 13 17:41:15 2006
From: darren at DarrenDuncan.net (Darren Duncan)
Date: Mon, 13 Nov 2006 17:41:15 -0800
Subject: [VPM] override perl-types
In-Reply-To:
References:
Message-ID:
At 5:18 PM -0800 11/13/06, Jer A wrote:
>hi all,
>
>i have an idea, and I wonder if it can be done using just perl code.
>
>I would like to override a datatype, so that when that type is
>initialized, acted upon, or manipulated, I can customize the code, and
>store the data in a different way, e.g. in memory or whatever.
>
>E.g. suppose I want @arr to actually be a concatenated string, but
>act as if it is an array.
>
>I hope you understand what I am getting at.
>I am using perl 5.8.
>
>Your help is appreciated. thanks a bunch.
Look up the Perl 5 "tie" feature ... it will probably do exactly what
you want. -- Darren Duncan
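To make the suggestion concrete, here is a minimal sketch of a tied array that stores its elements internally as one comma-joined string. (JoinedArray is an illustrative name of mine, not an existing module, and the sketch mishandles elements that contain commas or are empty strings.)

```perl
use strict;
use warnings;

package JoinedArray;
use Tie::Array;
our @ISA = ('Tie::Array');    # Tie::Array derives PUSH, POP, etc. for us

sub TIEARRAY  { my ($class) = @_; return bless { s => '' }, $class }
sub _fields   { my ($self) = @_;
                return $self->{s} eq '' ? () : split /,/, $self->{s} }
sub FETCHSIZE { my ($self) = @_; my @f = $self->_fields; return scalar @f }
sub STORESIZE { my ($self, $n) = @_;
                my @f = $self->_fields; $#f = $n - 1;
                $self->{s} = join ',', map { defined $_ ? $_ : '' } @f }
sub FETCH     { my ($self, $i) = @_; return ($self->_fields)[$i] }
sub STORE     { my ($self, $i, $v) = @_;
                my @f = $self->_fields; $f[$i] = $v;
                $self->{s} = join ',', map { defined $_ ? $_ : '' } @f }

package main;
tie my @arr, 'JoinedArray';
$arr[0] = 'foo';          # every access goes through the methods above
$arr[1] = 'bar';
print "$arr[1] / ", scalar(@arr), " / ", tied(@arr)->{s}, "\n";
# bar / 2 / foo,bar
```

See perldoc perltie and perldoc Tie::Array for the full method list.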
From Peter at PSDT.com Mon Nov 13 18:08:47 2006
From: Peter at PSDT.com (Peter Scott)
Date: Mon, 13 Nov 2006 18:08:47 -0800
Subject: [VPM] Perl Mongers Meeting November 21
Message-ID: <6.2.3.4.2.20061113130139.0286bd58@mail.webquarry.com>
Victoria.pm will meet at its regular date, time, and place at 7:00 pm
on Tuesday, November 21, at UVic in ECS (Engineering Computer Science
building) room 660 (see http://www.uvic.ca/maps/index.html). (There
will be no December meeting. In January we will start visiting the
beginning-level topics that I received requests for.)
Darren Duncan will present a new homegrown RDBMS that is written in
Perl, including an overview of its features and design, featuring a
walkthrough of what implementation code exists so far, and also
hopefully showing a live demo of it in action, if it is far enough
along for that by the meeting date.
This RDBMS is intended to be a complete implementation of a truly
relational DBMS, as introduced by Edgar F. Codd and maintained by Hugh
Darwen and Chris Date. The project is quite distinctive and has a
significant number of differences from SQL DBMSs, which are not truly
relational due to either missing some required features or due to
providing certain mis-features.
Note: Some useful background material can be seen at
http://en.wikipedia.org/wiki/Relational_data_model and
http://en.wikipedia.org/wiki/Relational_algebra .
Darren's RDBMS has a Perl 5 implementation named "QDRDBMS", which is
what will be walked through, and a Perl 6 implementation under some
other name will be made shortly thereafter or in parallel; QDRDBMS is
officially a prototype that is deprecated in favor of the longer-term
Perl 6 product.
Some key differences between QDRDBMS and a SQL DBMS are:
- QDRDBMS uses the terminology "relation value" (or "relation"),
"relation variable" (or "relvar"), "tuple" and "attribute" for concepts
analogous to SQL's terminology "rowset", "table", "row" and
"column". A "base relvar" and a "virtual relvar" correspond to a
"table" and a "view"; "relvar" refers to both. A relational database
is a collection of relvars, analogous to a SQL database being a
collection of tables. An "update" operation includes not only
assignment but SQL's concept of "insert", "update", "delete", etc.
- QDRDBMS uses 2-valued logic rather than 3+-valued logic like SQL;
QDRDBMS does not have a native "null" concept. There are multiple
better alternatives to express "unknown" or "not applicable" etc with
it. In SQL terms, all relation attributes are implicitly not nullable.
- QDRDBMS always uses exceptions to indicate failure, whether due to
attempting a constraint violation or by trying to divide by zero; these
can be explicitly trapped and responded to as the user chooses. SQL
sometimes uses exceptions to indicate failure, and other times it just
returns null values instead.
- QDRDBMS is based on sets, rather than bags as SQL is; a tuple is an
unordered set of attributes, and a relation is an unordered set of tuples.
- All QDRDBMS attributes are named and must have distinct names; SQL
columns are ordered, do not have to be named or can have duplicated names.
- In SQL terms, a relation has an implicit unique key constraint over
all of its attributes, if it doesn't have an explicit key defined. A
truly relational database is always in at least first normal form by
definition.
- A QDRDBMS relation or tuple can validly have zero attributes while
a SQL table or row must always have at least one column.
- A QDRDBMS relation has no hidden attributes like SQL's "row id";
they only contain the attributes that the user defined for them. But
since relations contain no duplicates, you can reliably address their
elements by just their attribute values.
- All QDRDBMS operations involving relations are set-based; there are
no tuple-at-a-time operations like SQL's current-row cursors. (That's
not to say that the database-application gateway can't stream data.)
- QDRDBMS queries are composed of simpler relational operators
composed into arbitrary user-defined expressions, rather than the
monolithic and complicated SQL "select" operators.
- QDRDBMS users have the same kind of flexibility with relational
expressions as they do with string or numeric or logical etc
expressions; SQL users typically have different syntax structures for
table or rowset operations than for other data types.
- Each QDRDBMS relational operator returns only distinct results (as
a relation value); the SQL relational operators are inconsistent, with
some returning distinct results by default (eg, "union"), while others
return duplicated results by default (eg, "join"). Note that list
summarizing operations like "sum" and "average" still do the correct
thing, despite the aforementioned being true for list-returning operators.
- QDRDBMS is strongly typed, with all values and variables being of a
specific type, and operators taking and returning values of specific
types. No implicit type casting is done, and all type conversions must
be done explicitly. How strong or weak data types are varies by the
SQL DBMS product, and many of them do implicit casting.
- QDRDBMS has distinct numerical division operators for integers and
fractional numbers, which return integers or fractional numbers
respectively. By contrast, some SQL DBMSs have just one behaviour, and
others will change behaviour semi-arbitrarily depending on what a
loosely typed input looks like; eg, using the number "3.0" may not do
what users expect.
- QDRDBMS data types can be scalar or collection types, arbitrarily
complex; relation or tuple attributes can be composed of either, and
attributes can even be relations or tuples. SQL DBMSs often don't let
you use collection types or tables for table columns, or their
provision is inconsistent.
- QDRDBMS uses the same relational operators for both database
relations and component relations of other data types. SQL requires
you to use different operators for each though they are alike; eg,
"union" vs "multiset union".
- QDRDBMS empowers users to define their own data types and
operators, while only some SQL DBMSs do (as "domains" or "types" or
"stored procedures" etc).
- QDRDBMS operators can be recursive, while only some SQL DBMSs allow this.
- QDRDBMS operators take named arguments rather than positional ones,
so there is not only flexibility but also consistency between those and
the way relation attributes are used, which is by name.
- Every QDRDBMS data type must have at least equality, inequality
testing and value assignment operators, even if it has no other
operators. SQL does not require data types to have equality test
operators or be assignable.
- Fundamentally, the only QDRDBMS update operator is the assignment
operator, and any other update operators (eg, "INSERT", "UPDATE",
"DELETE") are just short-hand for particular assignment expressions.
SQL doesn't provide assignment syntax for many of its update operators.
- All QDRDBMS data types are immutable; you only update variables as
a whole, sometimes by assigning a new value derived from the old value
but for the intended mutation. Short-hand operators can make it appear
different though.
- All QDRDBMS database constraints are immediate, and are applied
between statement boundaries at all levels of granularity. A SQL DBMS
constraint can be defined as deferrable, such as only being applied at
a transaction commit time. So QDRDBMS guarantees that no query or
expression will ever see a database that is in an inconsistent /
constraint-violating state.
- QDRDBMS empowers you to perform multiple variable assignments
simultaneously (which is syntactic sugar for a single assignment to the
database as a whole), so a multi-part change can be made without
tripping the immediate constraints. So it is possible to, eg, record a
credit to one bank account and a corresponding debit to another,
without the database being in an inconsistent state "between" those two
actions.
- Where at all possible, there is no distinction between base and
virtual relvars from the database user's point of view; they can query
and assign to / update either. The database can be redefined so that
some base and virtual relvars become virtual and base, and the user can
continue like nothing had changed.
- QDRDBMS provides system catalog relvars sufficient to completely
describe the entire database. This is analogous to the "information
schema" concept of SQL, which some SQL DBMSs provide and others
don't. The QDRDBMS catalog is AST-based, though, in that the various
details are atomic, rather than consisting of literal SQL strings for
many parts (eg, procedure definitions).
- The catalog relvars of QDRDBMS are user-updatable; by contrast,
they are read-only in SQL. In fact, updating the QDRDBMS
system catalog using DML operations is the fundamental means to perform
data definition or schema changes to a database. While QDRDBMS
provides short-hands for this, analogous to SQL's "create table" or
"alter table" etc, they are just short-hands. SQL's "create" etc statements
are the only normal way to do schema changes, and they are a lot less flexible.
- QDRDBMS supports arbitrary depth child transactions, where any
statements within a transaction level are collectively atomic and can
succeed or fail. Any statement block can be marked as atomic, and all named
routines and try-catch blocks are atomic; in the last case, a thrown
exception indicates a failure of the block. Individual statements are
implicitly the trivial case of a transaction.
- QDRDBMS is implicitly in auto-commit mode by default, where each
statement commits immediately on success. Defining an explicit
transaction is effectively just making a larger atomic statement. This
is the most consistent way of doing things when you support child
transactions. Some SQL DBMSs are this way too by default, while others
require explicit "commit" statements for anything to persist even when
no explicit transaction is started.
- In QDRDBMS, all operations are subject to transactions, including
updates of the schema itself, and can be rolled back. Some SQL DBMSs
(such as MySQL) implicitly commit certain operations even if in an
explicit transaction.
- The native query language of QDRDBMS is an AST, though a wrapper
that accepts string queries is provided too, so users don't have to
generate query strings as with SQL, and they don't have to worry
about escaping; there are no injection vulnerabilities with the AST
as there are with SQL.
- The QDRDBMS API is designed to be easily wrappable with alternate
or simplified interfaces that the users choose. It is a lot harder to
wrap a SQL DBMS.
- QDRDBMS is fully ACID compliant, while some SQL DBMSs are not.
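To make the "queries as composable expressions" point above concrete, here is a toy illustration in plain Perl (my own sketch, not QDRDBMS's actual API): relations modelled as arrays of hashes, with restrict (selection) and project written as ordinary functions that nest like any other expression and return only distinct tuples.

```perl
use strict;
use warnings;

# restrict (selection): keep the tuples satisfying a predicate
sub restrict {
    my ($rel, $pred) = @_;
    return [ grep { $pred->($_) } @$rel ];
}

# project: keep only the named attributes, removing duplicate tuples
# (relations are sets, so projection deduplicates automatically)
sub project {
    my ($rel, @attrs) = @_;
    my %seen;
    return [ grep { !$seen{ join "\0", @{$_}{@attrs} }++ }
             map  { my $t = $_; +{ map { $_ => $t->{$_} } @attrs } } @$rel ];
}

my $accounts = [
    { owner => 'alice', branch => 'uvic',    balance => 10  },
    { owner => 'bob',   branch => 'uvic',    balance => 200 },
    { owner => 'dave',  branch => 'uvic',    balance => 150 },
    { owner => 'carol', branch => 'camosun', balance => 5   },
];

# Composed like any other expression; duplicate tuples vanish:
my $rich_branches =
    project(restrict($accounts, sub { $_[0]{balance} >= 100 }), 'branch');
print scalar(@$rich_branches), " branch(es): $rich_branches->[0]{branch}\n";
# 1 branch(es): uvic
```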
Some key advantages of QDRDBMS over a SQL DBMS are:
- Better language consistency.
- Language ambiguity is removed.
- Language is better Huffman-coded.
- Language is a lot more flexible and capable.
- Different queries that are logically identical will return the same result.
- Queries are easier for the DBMS to optimize.
- The product is orders of magnitude easier to implement, providing
more functionality and reliability with a smaller footprint and less
complexity.
- There is no object/relational impedance mismatch; object-oriented
concepts can be effectively stored in a relational database with little
to no fuss.
- You can emulate any SQL dialect or SQL features over the QDRDBMS
API, so it is easy to port schemas and applications over, and you can
use the QDRDBMS API as an intermediary for translating from one query
language to another if you desire; either way, QDRDBMS helps you avoid
database vendor lock-in.
If possible, there will be a demo of a simple application that uses
QDRDBMS at the meeting, probably a simple command-line genealogy program.
You can see the QDRDBMS code as it is being developed by looking in
http://darrenduncan.net/QDRDBMS/ ; very few pieces are there as yet,
but more will be added over time, or existing ones changed. Once
QDRDBMS can actually do something useful, and it can start being used
in production, the first version will be uploaded to CPAN as
QDRDBMS-0.001, and it will be put in a shared version control system too.
Darren will also start helping other Perl frameworks on CPAN adapt to
support QDRDBMS for their users, probably starting with DBIx::Class and
going from there.
Questions can be asked at any time during the talk, and the talk can be
customized to things that attendees want to focus on.
(Courtesy copy to VLUG and VOSSOC members by permission of the list
managers. Victoria.pm's home page is .)
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com/
http://www.perlmedic.com/
From Peter at PSDT.com Mon Nov 13 18:11:50 2006
From: Peter at PSDT.com (Peter Scott)
Date: Mon, 13 Nov 2006 18:11:50 -0800
Subject: [VPM] override perl-types
In-Reply-To:
References:
Message-ID: <6.2.3.4.2.20061113181056.02919f00@mail.webquarry.com>
At 05:18 PM 11/13/2006, Jer A wrote:
>i have an idea, and I wonder if it can be done using just perl code.
Yes, it can. perldoc perltie.
>I would like to override a datatype, so when that type is
>initialized,acted apon,or manipulated,I can customize the code, and
>store the data in a different way, eg. in memory or whatever.
>
>eg. suppose I want @arr to be actually a concatinated string, but act
>as if it is an array.
>
>I hope you understand what I am getting at.
>I am using perl 5.8.
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com/
http://www.perlmedic.com/
From Peter at PSDT.com Mon Nov 20 06:09:00 2006
From: Peter at PSDT.com (Peter Scott)
Date: Mon, 20 Nov 2006 06:09:00 -0800
Subject: [VPM] Perl Mongers Meeting tomorrow
Message-ID: <6.2.3.4.2.20061113180855.02910ba8@mail.webquarry.com>
Victoria.pm will meet at its regular date, time, and place tomorrow at
7:00 pm on Tuesday, November 21, at UVic in ECS (Engineering Computer
Science building) room 660 (see
http://www.uvic.ca/maps/index.html). (There will be no December
meeting. In January we will start visiting the beginning-level topics
that I received requests for.)
Darren Duncan will present a new homegrown RDBMS that is written in
Perl, including an overview of its features and design, featuring a
walkthrough of what implementation code exists so far, and also
hopefully showing a live demo of it in action, if it is far enough
along for that by the meeting date.
This RDBMS is intended to be a complete implementation of a truly
relational DBMS, as introduced by Edgar F. Codd and maintained by Hugh
Darwen and Chris Date. The project is quite distinctive and has a
significant number of differences from SQL DBMSs, which are not truly
relational due to either missing some required features or due to
providing certain mis-features.
Note: Some useful background material can be seen at
http://en.wikipedia.org/wiki/Relational_data_model and
http://en.wikipedia.org/wiki/Relational_algebra .
Darren's RDBMS has a Perl 5 implementation named "QDRDBMS", which is
what will be walked through, and a Perl 6 implementation under some
other name will be made shortly thereafter or in parallel; QDRDBMS is
officially a prototype that is deprecated in favor of the longer-term
Perl 6 product.
Some key differences between QDRDBMS and a SQL DBMS are:
- QDRDBMS uses the terminology "relation value" (or "relation"),
"relation variable" (or "relvar"), "tuple" and "attribute" for concepts
analogous to SQL's terminology "rowset", "table", "row" and
"column". A "base relvar" and a "virtual relvar" correspond to a
"table" and a "view"; "relvar" refers to both. A relational database
is a collection of relvars, analogous to a SQL database being a
collection of tables. An "update" operation includes not only
assignment but SQL's concept of "insert", "update", "delete", etc.
- QDRDBMS uses 2-valued logic rather than 3+-valued logic like SQL;
QDRDBMS does not have a native "null" concept. There are multiple
better alternatives to express "unknown" or "not applicable" etc with
it. In SQL terms, all relation attributes are implicitly not nullable.
- QDRDBMS always uses exceptions to indicate failure, whether due to
attempting a constraint violation or by trying to divide by zero; these
can be explicitly trapped and responded to as the user chooses. SQL
sometimes uses exceptions to indicate failure, and other times it just
returns null values instead.
- QDRDBMS is based on sets, rather than bags as SQL is; a tuple is an
unordered set of attributes, and a relation is an unordered set of tuples.
- All QDRDBMS attributes are named and must have distinct names; SQL
columns are ordered, do not have to be named or can have duplicated names.
- In SQL terms, a relation has an implicit unique key constraint over
all of its attributes, if it doesn't have an explicit key defined. A
truly relational database is always in at least first normal form by
definition.
- A QDRDBMS relation or tuple can validly have zero attributes while
a SQL table or row must always have at least one column.
- A QDRDBMS relation has no hidden attributes like SQL's "row id";
they only contain the attributes that the user defined for them. But
since relations contain no duplicates, you can reliably address their
elements by just their attribute values.
- All QDRDBMS operations involving relations are set-based; there are
no tuple-at-a-time operations like SQL's current-row cursors. (That's
not to say that the database-application gateway can't stream data.)
- QDRDBMS queries are composed of simpler relational operators
composed into arbitrary user-defined expressions, rather than the
monolithic and complicated SQL "select" operators.
- QDRDBMS users have the same kind of flexibility with relational
expressions as they do with string or numeric or logical etc
expressions; SQL users typically have different syntax structures for
table or rowset operations than for other data types.
- Each QDRDBMS relational operator returns only distinct results (as
a relation value); the SQL relational operators are inconsistent, with
some returning distinct results by default (eg, "union"), while others
return duplicated results by default (eg, "join"). Note that list
summarizing operations like "sum" and "average" still do the correct
thing, despite the aforementioned being true for list-returning operators.
- QDRDBMS is strongly typed, with all values and variables being of a
specific type, and operators taking and returning values of specific
types. No implicit type casting is done, and all type conversions must
be done explicitly. How strong or weak data types are varies by the
SQL DBMS product, and many of them do implicit casting.
- QDRDBMS has distinct numerical division operators for integers and
fractional numbers, which return integers or fractional numbers
respectively. By contrast, some SQL DBMSs have just one behaviour, and
others will change behaviour semi-arbitrarily depending on what a
loosely typed input looks like; eg, using the number "3.0" may not do
what users expect.
- QDRDBMS data types can be scalar or collection types, arbitrarily
complex; relation or tuple attributes can be composed of either, and
attributes can even be relations or tuples. SQL DBMSs often don't let
you use collection types or tables for table columns, or their
provision is inconsistent.
- QDRDBMS uses the same relational operators for both database
relations and component relations of other data types. SQL requires
you to use different operators for each though they are alike; eg,
"union" vs "multiset union".
- QDRDBMS empowers users to define their own data types and
operators, while only some SQL DBMSs do (as "domains" or "types" or
"stored procedures" etc).
- QDRDBMS operators can be recursive, while only some SQL DBMSs allow this.
- QDRDBMS operators take named arguments rather than positional ones,
so there is not only flexibility but also consistency between those and
the way relation attributes are used, which is by name.
- Every QDRDBMS data type must have at least equality, inequality
testing and value assignment operators, even if it has no other
operators. SQL does not require data types to have equality test
operators or be assignable.
- Fundamentally, the only QDRDBMS update operator is the assignment
operator, and any other update operators (eg, "INSERT", "UPDATE",
"DELETE") are just short-hand for particular assignment expressions.
SQL doesn't provide assignment syntax for many of its update operators.
- All QDRDBMS data types are immutable; you only update variables as
a whole, sometimes by assigning a new value derived from the old value
but for the intended mutation. Short-hand operators can make it appear
different though.
- All QDRDBMS database constraints are immediate, and are applied
between statement boundaries at all levels of granularity. A SQL DBMS
constraint can be defined as deferrable, such as only being applied at
a transaction commit time. So QDRDBMS guarantees that no query or
expression will ever see a database that is in an inconsistent /
constraint-violating state.
- QDRDBMS empowers you to perform multiple variable assignments
simultaneously (which is syntactic sugar for a single assignment to the
database as a whole), so a multi-part change can be made without
tripping the immediate constraints. So it is possible to, eg, record a
credit to one bank account and a corresponding debit to another,
without the database being in an inconsistent state "between" those two
actions.
- Where at all possible, there is no distinction between base and
virtual relvars from the database user's point of view; they can query
and assign to / update either. The database can be redefined so that
some base and virtual relvars become virtual and base, and the user can
continue like nothing had changed.
- QDRDBMS provides system catalog relvars sufficient to completely
describe the entire database. This is analogous to the "information
schema" concept of SQL, which some SQL DBMSs provide and others
don't. The QDRDBMS catalog is AST-based, though, in that the various
details are atomic, rather than consisting of literal SQL strings for
many parts (eg, procedure definitions).
- The catalog relvars of QDRDBMS are user-updatable; by contrast,
they are read-only in SQL. In fact, updating the QDRDBMS
system catalog using DML operations is the fundamental means to perform
data definition or schema changes to a database. While QDRDBMS
provides short-hands for this, analogous to SQL's "create table" or
"alter table" etc, they are just short-hands. SQL's "create" etc statements
are the only normal way to do schema changes, and they are a lot less flexible.
- QDRDBMS supports arbitrary depth child transactions, where any
statements within a transaction level are collectively atomic and can
succeed or fail. Any statement block can be marked as atomic, and all named
routines and try-catch blocks are atomic; in the last case, a thrown
exception indicates a failure of the block. Individual statements are
implicitly the trivial case of a transaction.
- QDRDBMS is implicitly in auto-commit mode by default, where each
statement commits immediately on success. Defining an explicit
transaction is effectively just making a larger atomic statement. This
is the most consistent way of doing things when you support child
transactions. Some SQL DBMSs are this way too by default, while others
require explicit "commit" statements for anything to persist even when
no explicit transaction is started.
- In QDRDBMS, all operations are subject to transactions, including
updates of the schema itself, and can be rolled back. Some SQL DBMSs
(such as MySQL) implicitly commit certain operations even if in an
explicit transaction.
- The native query language of QDRDBMS is an AST, though a wrapper
that accepts string queries is provided too, so users don't have to
generate query strings as with SQL, and they don't have to worry
about escaping; there are no injection vulnerabilities with the AST
as there are with SQL.
- The QDRDBMS API is designed to be easily wrappable with alternate
or simplified interfaces that the users choose. It is a lot harder to
wrap a SQL DBMS.
- QDRDBMS is fully ACID compliant, while some SQL DBMSs are not.
Some key advantages of QDRDBMS over a SQL DBMS are:
- Better language consistency.
- Language ambiguity is removed.
- Language is better Huffman-coded.
- Language is a lot more flexible and capable.
- Different queries that are logically identical will return the same result.
- Queries are easier for the DBMS to optimize.
- The product is orders of magnitude easier to implement, providing
more functionality and reliability with a smaller footprint and less
complexity.
- There is no object/relational impedance mismatch; object-oriented
concepts can be effectively stored in a relational database with little
to no fuss.
- You can emulate any SQL dialect or SQL features over the QDRDBMS
API, so it is easy to port schemas and applications over, and you can
use the QDRDBMS API as an intermediary for translating from one query
language to another if you desire; either way, QDRDBMS helps you avoid
database vendor lock-in.
If possible, there will be a demo of a simple application that uses
QDRDBMS at the meeting, probably a simple command-line genealogy program.
You can see the QDRDBMS code as it is being developed by looking in
http://darrenduncan.net/QDRDBMS/ ; very few pieces are there as yet,
but more will be added over time, or existing ones changed. Once
QDRDBMS can actually do something useful, and it can start being used
in production, the first version will be uploaded to CPAN as
QDRDBMS-0.001, and it will be put in a shared version control system too.
Darren will also start helping other Perl frameworks on CPAN adapt to
support QDRDBMS for their users, probably starting with DBIx::Class and
going from there.
Questions can be asked at any time during the talk, and the talk can be
customized to things that attendees want to focus on.
(Courtesy copy to VLUG and VOSSOC members by permission of the list
managers. Victoria.pm's home page is .)
--
Peter Scott
Pacific Systems Design Technologies
http://www.perldebugged.com/
http://www.perlmedic.com/