This event is not sent to the object if the drag was not accepted during the On Drag Over event. If you process the On Drag Over event for an object and reject a drag, the On Drop event does not occur. Thus, if during the On Drag Over event you have tested the data type compatibility between the source and destination objects and have accepted a possible drop, you do not need to re-test the data during On Drop: you already know that the data is suitable for the destination object.
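In 4D code, this logic typically lives in the destination object's method. The following is a sketch only: the compatibility test and the drop-handling method are placeholders, and the accept/reject return values follow the usual On Drag Over convention (0 to accept, -1 to reject).

```
  // Object method of the destination object (sketch; names are illustrative)
Case of
   : (Form event=On Drag Over)
      If (vDataIsCompatible)  // your own test, performed once here
         $0:=0  // accept the drag; On Drop may follow
      Else
         $0:=-1  // reject the drag; On Drop will never occur
      End if
   : (Form event=On Drop)
      // No need to re-test the data: it was validated during On Drag Over
      HANDLE_DROP  // placeholder project method
End case
```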

An interesting aspect of the 4D drag-and-drop implementation is that 4D lets you do whatever you want. Examples:

• If a hierarchical list item is dropped over a text field, you can insert the text of the list item at the beginning, at the end, or in the middle of the text field.

• Your form contains a two-state picture button, which could represent an empty or full trash can. Dropping an object onto that button could mean (from the user interface standpoint) "delete the object that has been dragged and dropped into the trash can." Here, the drag and drop does not transport data from one point to another; instead, it performs an action.

• Dragging an array element from a floating window to an object in a form could mean "in this window, show the Customer record whose name you just dragged and dropped from the floating window listing the Customers stored in the database."

• And so on.

So, the 4D drag-and-drop interface is a framework which enables you to implement any user interface metaphor you may devise.

When the drag-and-drop operation is intended to copy the dragged data, the behavior of these commands depends on how many processes are involved:

• If the drag and drop is limited to one process, use these commands to perform the appropriate actions (e.g., simply assigning the source object to the destination object).

• If the drag and drop is interprocess, you need to be careful when accessing the dragged data; you must access the data instance from the source process. If the dragged data comes from a variable, use GET PROCESS VARIABLE to get the right value. If the dragged data comes from a field, remember that the current record for a table is probably different in the two processes, so you need to access the right record.
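For the interprocess case, a hedged sketch using DRAG AND DROP PROPERTIES and GET PROCESS VARIABLE (the variable names are illustrative):

```
  // In the On Drop event of the destination object (sketch)
DRAG AND DROP PROPERTIES($srcObject;$srcElement;$srcProcess)
If ($srcProcess#Current process)
   // The source belongs to another process: read its instance of the variable
   GET PROCESS VARIABLE($srcProcess;vSourceText;$draggedText)
Else
   $draggedText:=vSourceText
End if
```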


ElasticSearch is a great tool for full-text search over billions of records. But what if you want to search through files with the help of ElasticSearch? How should you extract and index files? After googling for "ElasticSearch searching PDFs" and "ElasticSearch index binary files", I didn't find any suitable solution, so I decided to write this post about the available options.

Ingest Attachment Plugin

The simplest and easiest-to-use solution is Ingest Attachment. It's a plugin for ElasticSearch that extracts content from almost all document types (thanks to Tika). It's a good choice for a quick start. However, Ingest Attachment can't be fine-tuned, and that's why it can't handle large files. We have posted about the pitfalls of Ingest Attachment before. The installation process is straightforward; check the official ElasticSearch site for details.
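As a minimal sketch, the attachment processor is used by defining an ingest pipeline and sending the file base64-encoded in a field the pipeline reads (the index name `files` and pipeline name `attachment` are illustrative):

```json
PUT _ingest/pipeline/attachment
{
  "description": "Extract content from uploaded files",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}

PUT files/_doc/1?pipeline=attachment
{
  "data": "<base64-encoded file content>"
}
```

After indexing, the extracted text and metadata appear under the `attachment` field of the document.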

Apache Tika

Apache Tika is the de-facto standard for extracting content from files. Roughly speaking, Tika is a collection of open-source content-extraction libraries joined into a single library. It's open source and it has a REST API. You have to be experienced to set it up and configure it on your server. For example, I had issues setting up Tesseract to do OCR inside Tika. Also note that Tika doesn't work well with some kinds of PDFs (the ones with images inside) and that the REST API works much slower than direct Java calls, even on localhost.

So, you installed Tika; what's next? You need to create some kind of wrapper that passes files to Tika, collects the extracted content, and indexes it into ElasticSearch.
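A minimal version of such a wrapper might look like this. It assumes a Tika server on its default port and an ElasticSearch index named `files`; the field names are illustrative, and error handling is kept to a bare minimum.

```python
import json
import urllib.request
from pathlib import Path

TIKA_URL = "http://localhost:9998/tika"       # default Tika server endpoint
ES_URL = "http://localhost:9200/files/_doc"   # index name "files" is an assumption


def extract_text(path: str) -> str:
    """PUT the raw file to the Tika server and read back plain text."""
    req = urllib.request.Request(
        TIKA_URL,
        data=Path(path).read_bytes(),
        method="PUT",
        headers={"Accept": "text/plain"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


def build_doc(path: str, content: str) -> dict:
    """Shape of the indexed document; field names are illustrative."""
    return {"filename": Path(path).name, "content": content}


def index_file(path: str) -> None:
    """Extract content with Tika and index the result into ElasticSearch."""
    body = json.dumps(build_doc(path, extract_text(path))).encode("utf-8")
    req = urllib.request.Request(
        ES_URL,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```

A real wrapper would also need retries, batching (the `_bulk` API), and handling for the files Tika fails on.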

To make ElasticSearch search fast through large files you have to tune it yourself; the details are covered in separate posts.
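One common tweak (an assumption about your setup, not something prescribed above): storing term vectors with positions and offsets in the mapping lets ElasticSearch use the fast vector highlighter, instead of re-analyzing huge extracted texts at query time.

```json
PUT files
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "term_vector": "with_positions_offsets"
      }
    }
  }
}
```

The trade-off is a noticeably larger index, so it only pays off when you highlight search hits inside large documents.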

To sum up, Tika is a great solution, but it requires a lot of code-writing and fine-tuning, especially for edge cases: for Tika, these are weird PDFs and OCR.

FsCrawler

FsCrawler is a "quick and dirty" open-source solution for those who want to index documents from their local filesystem or over SSH. It crawls your filesystem and indexes new files, updates existing ones, and removes old ones. FsCrawler is written in Java and requires some additional work to install and configure. It supports scheduled crawling (e.g., every 15 minutes), and it also has a basic API for submitting files and managing schedules. FsCrawler uses Tika inside; generally speaking, you can use FsCrawler as the glue between Tika and ElasticSearch.