I use Scribe to generate a Word and Excel document for a biweekly report. The template documents are saved within container fields in the relevant database, and are exported to a location on the client hard drive for reading/writing via Scribe.
I have been contemplating migrating our Scribe plugin onto our FM server instead of my client, which would alleviate a few rare but pesky problems. Generated Word or Excel documents would be treated the same way, with the additional step of just emailing the final files to myself via SMTP.
But our server (Windows Server 2008 R2 Enterprise VM) currently does not have Word or Excel installed. Perhaps this is a dumb question, but do those programs need to be installed on that server in order for Scribe to be able to read and/or write to .docx or .xlsx files?

I am using the MasterDetail layout from Todd Geist in a couple of places in our database. By default this layout loads the first 2000 records into the MasterDetail portal. In order to save time and bandwidth I've added some logic to perform a find for only active records on the first navigation to the layout. This OnLayoutEnter behavior is only done for the first navigation to the layout, because I don't want it to interfere with any found sets the user may have isolated on subsequent navigations. I've also added an OnModeEnter script trigger to detect entry into Browse Mode, which successfully squelches this auto-find behavior in the event a user navigates to the layout in find mode, and then performs a find.
The logic looks similar to this:
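Roughly, in script-step form (the flag variable and field names here are placeholders, not my actual names):

```
# OnLayoutEnter trigger script (sketch)
# Bail out if we've already done the first-load find this session
If [ $$DID_INITIAL_FIND ]
    Exit Script [ Text Result: "" ]
End If
Set Variable [ $$DID_INITIAL_FIND ; Value: True ]
# If the user arrived in Find mode, skip the auto-find
# so their own find (caught by the OnModeEnter trigger) stands
If [ Get ( WindowMode ) = 1 ]
    Exit Script [ Text Result: "" ]
End If
# Otherwise, pull only the active records
Enter Find Mode [ Pause: Off ]
Set Field [ Records::Active ; 1 ]
Perform Find []
```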
So far so good; this works okay. The remaining issue, however, is that our users sometimes want to make snapshot links to these layouts. Unfortunately, I don't know how to detect that the layout is opening for the first time in a session from a snapshot link, so the auto-find overrides the snapshot link's found set.
Is there any way around this? A way to detect opening from a snapshot link, or perhaps some superior logic to approaching what I'm trying to do here that isn't prone to this problem?

So, basically, you should just always use Refresh Portal, whether it's a portal or not, for the most flexible code. That's just...counter-intuitive and weird. All portals are objects; not all objects are portals. These coding options are bass-ackwards. Gotta agree with the above sentiments: why in the world would they have gone this route rather than just updating the Refresh Object step? I mean, I guess it gives more visibility to the ability to properly refresh a portal in this release, but a nice bullet point about updated Refresh Object functionality with portals would have worked just as well!

I just watched the Soliant Consulting video giving some information on the new Refresh Portal script step. I have to wonder...what is the difference between this script step and the Refresh Object script step? Didn't Refresh Object do exactly the same thing if you specified the portal object name? Is there some performance benefit to using Refresh Portal?

These are good points, and exactly the kinds of things I need to be mindful of. I'm not able to devote all my time to development, so I appreciate the warnings about potential pitfalls. I haven't dug into this problem just yet, as I'm on a deadline for some other things at the moment, but if I get time to experiment I'll work on a sample file first. If it works well, I'll post it here; otherwise I'll just use multiple VL tables. Thanks again.

Thanks, all, for the input! Sounds like there are indeed a lot of variations on how to approach this scenario, and lots of fancy ideas here to experiment with! I really like the idea of using a category field; I think I can see how I could get that to work. And @LaRetta, thanks for the detailed examples! It's quite a different setup from how I use my virtual table, but there are some great ideas here that will undoubtedly be helpful to have in my databanks (for example, I really like the virtual table that automatically expands when it doesn't have enough records; I hadn't seen that before). Since there are so many ways to approach it, I will likely go with either multiple VL tables or my original idea of converting my VL table to repeating fields. Multiple VL tables is pretty easy, but for some reason feels redundant to me. Maybe it sounds clunky to some, but it makes a lot of sense in my head to just use the one virtual table with repeating fields, assuming the performance wouldn't be dramatically different from other methods.

I am interested in rendering multiple virtual lists on the same layout at the same time. A couple of methods have occurred to me, but the one that seems easiest to implement would be to simply use repeating fields for my Virtual List table, making sure to reference the correct repetitions in the scripting and on the relevant portals. That way you could have as many virtual lists on a layout as you have repetitions for your fields. I have no idea if this is a good idea or a bad one, or whether it negatively impacts performance compared to other techniques such as having multiple virtual list tables. Is there a widely accepted technique that is considered the best way to render multiple virtual lists on the same layout? Or is my idea as good as any? Any advice here is appreciated!
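For what it's worth, the repeating-field idea I'm picturing is an unstored repeating calculation where each repetition reads from its own global list variable, something like this (the $$VL_LIST_n variables and field names are just illustrative):

```
// Unstored repeating text calculation on the VirtualList table,
// e.g. 10 repetitions = up to 10 independent lists on a layout.
// Repetition n pulls row VirtualList::ID from $$VL_LIST_n.
GetValue (
    Evaluate ( "$$VL_LIST_" & Get ( CalculationRepetitionNumber ) ) ;
    VirtualList::ID
)
```

Each portal would then display a specific repetition of this field, and the scripts would populate $$VL_LIST_1, $$VL_LIST_2, and so on.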

Yeah, that's the only thing I could figure. This table makes up the vast majority of our file size, so I basically had double the data consumption I should have for the solution and its data set. I don't understand how or why our system got like that, and I don't fully understand why the fix worked, but I'm delighted I found a solution to the problem of our swollen FM file, particularly one I could run as a script. I had nightmares about going through every single container, three per record, across 10,500 records, as a manual process to fix the problem.

In case anyone cares, after trying a huge number of things, I finally found something that worked. Not only did it work, it cut my FM file size down from 7ish GB to around 700MB! With all the problems with container fields in this particular table, it seemed unlikely that I'd be able to track down and fix every issue manually. However, I did note that even the containers where Export Field Contents was greyed out still had a file in the external folder. I found that if I copied the file from the external folder and dropped it back into the container, it appeared to fix the problem: the item could be exported again. So I basically copied the entire external folder, then wrote a script to replace each container with the corresponding referenced file. I don't know why this works, but it works. Presumably something went wrong with a bunch of these containers when this table was being built. In any case, you can't argue with the results: an order-of-magnitude decrease in the FM file size surely indicates I was barking up the right tree. I can now export without crashing, and all that good stuff as well.
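For anyone curious, the repair script was essentially a loop along these lines (table, field, and path names are illustrative, not my actual ones; GetContainerAttribute assumes FM13 or later):

```
# Re-insert each record's file from a copy of the external container folder
Show All Records
Go to Record/Request/Page [ First ]
Loop
    # Look up the file name the container claims to hold
    Set Variable [ $file ; Value: GetContainerAttribute ( Purifications::Image ; "filename" ) ]
    # Point at the copy of that file in the backed-up external folder
    Set Variable [ $path ; Value: "filewin:/C:/ContainerBackup/" & $file ]
    # Dropping the same file back in appears to rebuild the container entry
    Insert File [ Purifications::Image ; "$path" ]
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
```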

Other notes:
- For the containers that won't export, the external folder does actually contain the file, even though I can't export it.
- Cutting the image and pasting it back does not change the problem; it still can't be exported.
- Dragging and dropping from a problem container to a test container with a different external path appears to duplicate the problem: Export Field Contents remains greyed out in the new container.
- Replace Field Contents does not work (in fact, it appears to eventually crash FM, much like exporting the containers does, if you try it for the whole table).
- Changing the container storage path does not make the images exportable either.
- So far, the only thing that seems to do anything at all is to completely replace the image with a new one.

Here is an example of what I get from one of these containers:
remote:TB15-283 72310.bmp
size:1350,519
JPEG:Purifications/file_one/TB15-283 72310.jpg
It looks completely normal in this respect. But it just...won't export. And yes, this field is set to be stored externally.
I have some more clues I might share. This table is currently roughly 10,500 records. A couple of years ago there was a schema redesign wherein two different tables were merged into this one.
I've found there is a stretch of about 1,200 records, starting at around record 800, where the vast majority of these containers are like this. The first 2,000 records in this table were imported from one specific table, so it appears something may have gone wrong with that import from around record 800 through the rest of the imported block, without returning an error of any kind.
There are other interesting things happening here. Before I read your advice about adopting a particular cure before being sure of the diagnosis, I replaced a couple hundred of these items with screenshot versions (honestly, they're just trace chromatograms; you can hardly tell the difference in quality). The FM file size appears to have gone down by about 600MB, from 6.9GB to 6.3GB! So this definitely seems to be at least part of the problem with my swollen FM file.
But it's going to suck to brute-force the remaining 1,000 or so records this way. And there could be random ones here and there that I don't know about (I don't know how to find them yet, outside of this block of known problem records). I am preparing to try a few other things, such as doing a Replace Field Contents into another container with a different path to see if that does anything worthwhile (I already know exports fail), or redefining the container path to see if that changes anything of note.
Finally, I've found some containers that appear to have two files in the GetAsText info:
remote:Picture 2.png
size:372,189
JPEG:Purifications/file_one/Picture 2_1.jpg
PNGf:Purifications/file_one/Picture 2_1.png
It's worth noting that these are *not* items that I've replaced with a screenshot; doing that appears to replace the original entirely. So where/why/how did that happen, and are there actually two files residing there somehow?
This just gets weirder and weirder!
In any case, I am happy to entertain any other suggestions as to what I might try with these, outside of trying to re-define the container path, doing a replace field contents to another container with a different path, or just brute force screenshotting all I can find (ultimately may still not work since there may be other records like this that I can't find). This is pretty uncharted territory for me, so please share if you have some ideas!

Thanks for the comment. This sent me in a direction toward solving the problem, I think.
I've begun suspecting that there are problems with some of my container data. So I've taken a particular table that has three containers and tried exporting the id and all the containers into a new FM file. I find that FM crashes after a certain point. So I then go into the exported file, omit all records that did not receive an id, go to the last record, and get its id. I then do a find on that id in the live database, show all, and flip to the next record; this should be the record that caused the crash. The container on this record appears to show an image, but when I right-click to export the data, I find 'Export Field Contents' is greyed out. This appears to be at least part of the problem?
I've found that I can fix a record by taking a screenshot of the image and loading the screenshot into the container in place of whatever was there. Export Field Contents is no longer greyed out, and that record exports fine on the next attempt. The problem is, a significant number of records within one particular stretch have this problem, and I have no idea how many. Is there any container calculation I might use to isolate these records?
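One idea worth trying (field name assumed): an unstored calculation on the GetAsText of the container, since a healthy externally stored image here reports a predictable set of metadata lines (remote name, size, and file path). It may not catch everything, as some problem containers report normal-looking metadata, but it could at least surface the oddballs whose line count differs:

```
// Unstored number calculation; flags containers whose GetAsText
// metadata has an unexpected number of lines (a normal externally
// stored image in this table shows three)
If ( ValueCount ( GetAsText ( Purifications::Image ) ) <> 3 ; 1 ; 0 )
```

A find on this calc field for 1 would then isolate the flagged records.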

I have tried that method. At the time, a couple of years ago running FM12, it shaved about 500MB off the file size, but the file was still nearly as big as the folder containing all the container data. Perhaps I should try it again? /shrug