Efficient Model Stitching - Source Code

This repository contains the code accompanying the work 'Exploring the Search Space of Neural Network Combinations obtained with Efficient Model Stitching', presented at GECCO 2024 - Workshop: Neuroevolution at Work.

Paper | Code (Zenodo - Archived) | Data (Zenodo - Archived)

Usage

  • Clone this repository
  • Set up the conda environment `recombnet` by executing `update_conda_from_yml.sh`.
  • On the node running the EAs, the C++ code needs to be compiled and installed; see `EALib/README.md` for details. (A setup sketch follows this list.)
  • Download the prerequisite datasets (ImageNet, VOC), and ensure they are unpacked into a dataset folder. Note the path to this folder.
  • Update the dataset path by replacing every occurrence of `<add-dataset-folder>` with the path to this folder (see the substitution sketch after this list).
  • Similarly, ensure the tmp directory has enough free space: tools such as ray write many logs, and runs terminate prematurely if this directory fills up. If need be, relocate the tmp directory.
  • Download stitched networks from (tbd), or use the *Stitching-Pretrained* notebooks to stitch one yourself.
  • Given a correct setup, running the commands from the `experiment-*.txt` files performs runs in an identical configuration. Due to the asynchronous nature of the EAs employed, the output is not deterministic.
    • Experiments for VOC were run on a SLURM cluster. The batch file used here sets up a ray cluster for the experiment before running the script; if a ray cluster is already set up, only the latter command should be necessary.
    • Experiments for ImageNet require a configured ray cluster, with the controller node being the node the scripts are run on. As we set up a new cluster for each individual run on SLURM, refer to `2023-12-25-run-exp-voc-final.sh` or, alternatively, to the ray documentation (see the cluster sketch after this list).
    • Data should be present on all nodes, not just the main node.
    • The environment needs to be set up on all nodes, not just the main node. However, the conda environment alone should suffice here: compiling and installing EALib may be skipped.
  • After running the experiments, `2024-01-02-process-run-data.ipynb` can be used to process the data from an individual run and find the approximation front.
  • The solutions on the front can then be reevaluated on the test set with the commands in `2024-01-03-reevaluate-cmds.txt` (and `2024-01-15-reeval-imagenet-b.txt`).
  • Finally, `2024-01-03-process-reeval-data.ipynb` can be used to create the relevant plots.
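As a rough sketch of the setup steps above (the repository URL and the `conda activate` step are assumptions; compiling the C++ code is left to `EALib/README.md`):

```bash
# Clone the repository and enter it (URL assumed from the repository name).
git clone https://github.com/8uurg/Efficient-Model-Stitching.git
cd Efficient-Model-Stitching

# Create the `recombnet` conda environment from the provided script,
# then activate it for the steps below.
bash update_conda_from_yml.sh
conda activate recombnet

# On the node running the EAs, additionally compile and install the C++
# code; see EALib/README.md for the authoritative instructions.
```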
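A minimal sketch of the dataset-path substitution and tmp relocation; which files contain the placeholder, the example dataset path, and the reliance on the standard `TMPDIR` convention are all assumptions:

```bash
# Find every file containing the placeholder and substitute the real path.
# /data/datasets is an illustrative path; use your own dataset folder.
grep -rl '<add-dataset-folder>' . | xargs sed -i 's|<add-dataset-folder>|/data/datasets|g'

# Relocate the temporary directory to a volume with enough space for logs.
export TMPDIR="/scratch/$USER/tmp"
mkdir -p "$TMPDIR"
```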
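And a sketch of a manually configured ray cluster for the ImageNet runs; the head address, port, and experiment file name are illustrative (on SLURM, `2023-12-25-run-exp-voc-final.sh` performs the equivalent setup per run):

```bash
# On the controller node: start the ray head. --temp-dir keeps ray's
# logs on the relocated temporary directory.
ray start --head --port=6379 --temp-dir="$TMPDIR"

# On every worker node (datasets and conda environment must be present):
ray start --address=<head-node-ip>:6379

# Back on the controller node: run the prepared commands.
bash experiment-<name>.txt   # one of the experiment-*.txt files
```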

Credit

DAEDALUS – Distributed and Automated Evolutionary Deep Architecture Learning with Unprecedented Scalability

This research code was developed as part of the Open Technology Programme project with number 18373, financed by the Dutch Research Council (NWO), Elekta, and Ortec Logiqcare.

Project leaders: Peter A.N. Bosman, Tanja Alderliesten
Researchers: Alex Chebykin, Arthur Guijt, Vangelis Kostoulas
Main code developer: Arthur Guijt
