Timestamp: 2004-05-28T16:47:01+12:00
Author: mdewsnip
Message: Even more improvements to the workspace and collection trees. These are now hugely improved -- with refreshing being much quicker and much more reliable.

File (1 edited):
  • trunk/gli/src/org/greenstone/gatherer/file/FileQueue.java

r7482 → r7491:

@@ -528,19 +528,18 @@
     // Don't worry about all this for true file move actions.
     if(job.type == FileJob.DELETE) {
-        // queue all of its children, (both filtered and non-filtered), but for deleting only. Don't queue jobs for a current move event, as they would be queued as part of copying. I have no idea way, per sec, however the children within the origin node are always invalid during deletion (there are several copies of some nodes?!?). I'll check that each child is only added once.
-        ///ystem.err.println("Directory has " + origin_node.getChildCount() + " children.");
+      // queue all of its children, (both filtered and non-filtered), but for deleting only. Don't queue jobs for a current move event, as they would be queued as part of copying. I have no idea way, per sec, however the children within the origin node are always invalid during deletion (there are several copies of some nodes?!?). I'll check that each child is only added once.
+      ///ystem.err.println("Directory has " + origin_node.getChildCount() + " children.");
      ///ystem.err.println("Directory actually has " + child_list.length + " children.");
-        origin_node.unmap();
-        origin_node.map();
-        ///atherer.println("Directory has " + origin_node.getChildCount() + " children.");
-        ///atherer.println("Directory actually has " + child_list.length + " children.");
-        for(int i = 0; i < origin_node.size(); i++) {
-        FileNode child_record = (FileNode) origin_node.get(i);
-        ///atherer.println("Queuing: " + child_record);
-        addJob(job.ID(), job.source, child_record, job.target, destination_node, FileJob.DELETE, job.undo, false, false, position);
-        //if(recycle_folder_record != null) {
-        //   recycle_folder_mappings.put(child_record, recycle_folder_record);
-        //}
-        }
+      origin_node.refresh();
+      ///atherer.println("Directory has " + origin_node.getChildCount() + " children.");
+      ///atherer.println("Directory actually has " + child_list.length + " children.");
+      for(int i = 0; i < origin_node.size(); i++) {
+          FileNode child_record = (FileNode) origin_node.get(i);
+          ///atherer.println("Queuing: " + child_record);
+          addJob(job.ID(), job.source, child_record, job.target, destination_node, FileJob.DELETE, job.undo, false, false, position);
+          //if(recycle_folder_record != null) {
+          //  recycle_folder_mappings.put(child_record, recycle_folder_record);
+          //}
+      }
     }
     // Requeue a delete job -after- the children have been dealt with. Remember I've reversed the direction of the queue so sooner is later. Te-he. Also have to remember that we have have followed this path to get here for a move job: Copy Directory -> Queue Child Files -> Delete Directory (must occur after child files) -> Queue Directory.
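
The substantive change is that the origin_node.unmap()/origin_node.map() pair is collapsed into a single origin_node.refresh() call before the directory's children are queued for deletion (the loop body also gains proper indentation). Below is a minimal, self-contained sketch of the queueing pattern this code follows; FileNode, FileJob and the queue here are simplified stand-ins rather than the real GLI classes, and the real FileQueue inserts jobs at a position in a reversed queue rather than simply appending them.

    // Hypothetical sketch only: FileNode, FileJob and the queueing below are
    // simplified stand-ins for the GLI classes, not the actual implementation.
    import java.util.ArrayList;
    import java.util.List;

    public class DeleteQueueSketch {

        // Minimal stand-in for a tree node backed by a directory.
        static class FileNode {
            final String name;
            private final List<FileNode> children = new ArrayList<FileNode>();

            FileNode(String name) { this.name = name; }

            void add(FileNode child) { children.add(child); }

            // One call that (re)builds the cached child list, replacing the old
            // unmap()-then-map() pair which could leave stale or duplicate entries.
            void refresh() {
                // A real implementation would re-read the directory here; the
                // in-memory list in this sketch is already current.
            }

            int size() { return children.size(); }
            FileNode get(int i) { return children.get(i); }
        }

        // Minimal stand-in for a queued file operation.
        static class FileJob {
            static final int DELETE = 0;
            final int type;
            final FileNode origin;
            FileJob(int type, FileNode origin) { this.type = type; this.origin = origin; }
        }

        private final List<FileJob> queue = new ArrayList<FileJob>();

        // Queue a DELETE job for every child first, then one for the directory itself,
        // so the directory is only removed after its children have been processed.
        void queueDirectoryDelete(FileNode dir) {
            dir.refresh();                               // take one consistent snapshot of the children
            for (int i = 0; i < dir.size(); i++) {
                queue.add(new FileJob(FileJob.DELETE, dir.get(i)));
            }
            queue.add(new FileJob(FileJob.DELETE, dir)); // parent last
        }

        public static void main(String[] args) {
            FileNode dir = new FileNode("images");
            dir.add(new FileNode("a.gif"));
            dir.add(new FileNode("b.gif"));

            DeleteQueueSketch sketch = new DeleteQueueSketch();
            sketch.queueDirectoryDelete(dir);
            for (FileJob job : sketch.queue) {
                System.out.println("DELETE " + job.origin.name);
            }
        }
    }

Refreshing the child list in one step, rather than unmapping and re-mapping the node, avoids the window in which the node's children are stale or duplicated, which is presumably what the commit message means by refreshing being quicker and more reliable.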