
  • fit-sport-modifier

    About

    Command line tool for modifying sport in .fit files.

    Usage of ./fit-sport-modifier [options] in.fit [out.fit]
    
    Show current sport fields:
    ./fit-sport-modifier in.fit
    
    Replace sport name:
    ./fit-sport-modifier -name "XC Skate Ski" ./in.fit ./out.fit
    
    Replace sport name and sub sport code:
    ./fit-sport-modifier -subsport 42 -name "XC Skate Ski" ./in.fit ./out.fit
    
    Options:
      -name string
            new sport name
      -sport int
            new sport code (default -1)
      -subsport int
            new sub sport code (default -1)

    Built on top of the github.com/muktihari/fit package.

    Available sport and sub sport values

    Taken from the Garmin FIT SDK.

    Sports

    name value
    generic 0
    running 1
    cycling 2
    transition 3
    fitness_equipment 4
    swimming 5
    basketball 6
    soccer 7
    tennis 8
    american_football 9
    training 10
    walking 11
    cross_country_skiing 12
    alpine_skiing 13
    snowboarding 14
    rowing 15
    mountaineering 16
    hiking 17
    multisport 18
    paddling 19
    flying 20
    e_biking 21
    motorcycling 22
    boating 23
    driving 24
    golf 25
    hang_gliding 26
    horseback_riding 27
    hunting 28
    fishing 29
    inline_skating 30
    rock_climbing 31
    sailing 32
    ice_skating 33
    sky_diving 34
    snowshoeing 35
    snowmobiling 36
    stand_up_paddleboarding 37
    surfing 38
    wakeboarding 39
    water_skiing 40
    kayaking 41
    rafting 42
    windsurfing 43
    kitesurfing 44
    tactical 45
    jumpmaster 46
    boxing 47
    floor_climbing 48
    baseball 49
    diving 53
    hiit 62
    racket 64
    wheelchair_push_walk 65
    wheelchair_push_run 66
    meditation 67
    disc_golf 69
    cricket 71
    rugby 72
    hockey 73
    lacrosse 74
    volleyball 75
    water_tubing 76
    wakesurfing 77
    mixed_martial_arts 80
    snorkeling 82
    dance 83
    jump_rope 84

    Sub sports

    name value comment
    generic 0
    treadmill 1 Run/Fitness Equipment
    street 2 Run
    trail 3 Run
    track 4 Run
    spin 5 Cycling
    indoor_cycling 6 Cycling/Fitness Equipment
    road 7 Cycling
    mountain 8 Cycling
    downhill 9 Cycling
    recumbent 10 Cycling
    cyclocross 11 Cycling
    hand_cycling 12 Cycling
    track_cycling 13 Cycling
    indoor_rowing 14 Fitness Equipment
    elliptical 15 Fitness Equipment
    stair_climbing 16 Fitness Equipment
    lap_swimming 17 Swimming
    open_water 18 Swimming
    flexibility_training 19 Training
    strength_training 20 Training
    warm_up 21 Tennis
    match 22 Tennis
    exercise 23 Tennis
    challenge 24
    indoor_skiing 25 Fitness Equipment
    cardio_training 26 Training
    indoor_walking 27 Walking/Fitness Equipment
    e_bike_fitness 28 E-Biking
    bmx 29 Cycling
    casual_walking 30 Walking
    speed_walking 31 Walking
    bike_to_run_transition 32 Transition
    run_to_bike_transition 33 Transition
    swim_to_bike_transition 34 Transition
    atv 35 Motorcycling
    motocross 36 Motorcycling
    backcountry 37 Alpine Skiing/Snowboarding
    resort 38 Alpine Skiing/Snowboarding
    rc_drone 39 Flying
    wingsuit 40 Flying
    whitewater 41 Kayaking/Rafting
    skate_skiing 42 Cross Country Skiing
    yoga 43 Training
    pilates 44 Fitness Equipment
    indoor_running 45 Run
    gravel_cycling 46 Cycling
    e_bike_mountain 47 Cycling
    commuting 48 Cycling
    mixed_surface 49 Cycling
    navigate 50
    track_me 51
    map 52
    single_gas_diving 53 Diving
    multi_gas_diving 54 Diving
    gauge_diving 55 Diving
    apnea_diving 56 Diving
    apnea_hunting 57 Diving
    virtual_activity 58
    obstacle 59 Used for events where participants run, crawl through mud, climb over walls, etc.
    breathing 62
    sail_race 65 Sailing
    ultra 67 Ultramarathon
    indoor_climbing 68 Climbing
    bouldering 69 Climbing
    hiit 70 High Intensity Interval Training
    amrap 73 HIIT
    emom 74 HIIT
    tabata 75 HIIT
    pickleball 84 Racket
    padel 85 Racket
    indoor_wheelchair_walk 86
    indoor_wheelchair_run 87
    indoor_hand_cycling 88
    squash 94
    badminton 95
    racquetball 96
    table_tennis 97
    fly_canopy 110 Flying
    fly_paraglide 111 Flying
    fly_paramotor 112 Flying
    fly_pressurized 113 Flying
    fly_navigate 114 Flying
    fly_timer 115 Flying
    fly_altimeter 116 Flying
    fly_wx 117 Flying
    fly_vfr 118 Flying
    fly_ifr 119 Flying
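
    For example, combining the options above with the codes from these tables, the following hypothetical invocation sets the sport to cross_country_skiing (12) and the sub sport to skate_skiing (42) while also replacing the sport name:

    ./fit-sport-modifier -sport 12 -subsport 42 -name "XC Skate Ski" ./in.fit ./out.fit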


    Visit original content creator repository
    https://github.com/IvanSafonov/fit-sport-modifier

  • serverless-appsync-lambda-httpresource-example

    serverless-appsync-lambda-httpresource-example

    This sample repository shows how to set up AWS AppSync to expose two GraphQL queries:

    • getWeatherWithHTTPResource, which gets weather information from https://wttr.in using an HTTP Resource
    • getWeatherWithLambda, which gets weather information from https://wttr.in using a Lambda that then executes the request

    This repository also shows different ways of testing an AWS AppSync API:

    • At the unit level, for the defined Lambda handler
    • At the mapping template level, by directly testing the defined VTL templates with @conduitvc/appsync-emulator-serverless/vtl
    • At the AppSync level, using the createAppSync helper available in @conduitvc/appsync-emulator-serverless/jest

    Notes:

    • Created dynamodb-local.js to start DynamoDB locally before the tests run, so the tests don’t time out (DynamoDB takes a while to start the first time)
    • Created jest-utils to provide utilities for testing the VTL files: loadVTL, which loads a VTL file, and renderVTL, which tries to render the provided VTL with the vtl function available in @conduitvc/appsync-emulator-serverless/vtl

    Tech stack

    Serverless

    https://serverless.com/

    The Serverless Framework is an open-source CLI for building and deploying serverless applications. With over 6 million deployments handled, the Serverless Framework is the tool developers trust to build cloud applications.

    Build Setup

    Using Docker

    # Build Dockerfile
    $ yarn docker:build
    
    # graphql will run on http://localhost:62222/graphql
    $ yarn docker:dev
    
    # Running tests
    $ yarn docker:test
    
    # Running tests with watch
    $ yarn docker:test:dev

    Running locally

    # install dependencies
    $ yarn
    
    # graphql will run on http://localhost:62222/graphql
    $ yarn run dev

    Testing

    curl 'http://localhost:62222/graphql' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: http://localhost:3001' -H 'x-api-key: ABC123' --data-binary '{"query":"{ getWeatherWithHTTPResource }"}' --compressed
    curl 'http://localhost:62222/graphql' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: http://localhost:3001' -H 'x-api-key: ABC123' --data-binary '{"query":"{ getWeatherWithLambda }"}' --compressed
    Visit original content creator repository https://github.com/davidpicarra/serverless-appsync-lambda-httpresource-example
  • scripts

    Scripts

    Simple and short programs to solve small problems

    A collection of small programs written in scripting languages to perform simple tasks.

    Languages used

    Perl

    Perl is a language for getting your job done. Of course, if your job is programming, you can get your job done with any “complete” computer language, theoretically speaking. But we know from experience that computer languages differ not so much in what they make possible, but in what they make easy. At one extreme, the so-called “fourth generation languages” make it easy to do some things, but nearly impossible to do other things. At the other extreme, so-called “industrial-strength” languages make it equally difficult to do almost everything. Perl is different. In a nutshell, Perl is designed to make the easy jobs easy, without making the hard jobs impossible.

    Raku

    Hi, my name is Camelia. I’m the spokesbug for Raku. Raku intends to carry forward the high ideals of the Perl community. Raku has been developed by a team of dedicated and enthusiastic volunteers, and continues to be developed. You can help too. The only requirement is that you know how to be nice to all kinds of people (and butterflies). Go to #raku and someone will be glad to help you get started.

    Awk

    Computer users spend a lot of time doing simple, mechanical data manipulation – changing the format of data, checking its validity, finding items with some property, adding up numbers, printing reports, and the like. All of these jobs ought to be mechanized, but it’s a real nuisance to have to write a special-purpose program in a standard language like C each time such a task comes up. Awk is a programming language that makes it possible to handle such tasks with very short programs, often only one or two lines long. An awk program is a sequence of patterns and actions that tell what to look for in the input data and what to do when it’s found. Awk searches a set of files for lines matched by any of the patterns; when a matching line is found, the corresponding action is performed. A pattern can select lines by combinations of regular expressions and comparison operations on strings, numbers, fields, variables, and array elements. Actions may perform arbitrary processing on selected lines; the action language looks like C but there are no declarations, and strings and numbers are built-in data types.

    Visit original content creator repository https://github.com/DaviNakamuraCardoso/scripts
  • dataset-uta4-rates

    UTA4: Rates Dataset


    Several datasets are fostering innovation in higher-level functions for everyone, everywhere. By providing this repository, we hope to encourage the research community to focus on hard problems. In this repository, we present the severity rates (BIRADS) assigned by clinicians while diagnosing several patients in our User Tests and Analysis 4 (UTA4) study. In other words, it is a dataset of the severity-rate (BIRADS) measurements collected during patient diagnosis. The work and results are published at a top Human-Computer Interaction (HCI) conference, AVI 2020 (page). Results were analyzed and interpreted from our Statistical Analysis charts. The user tests took place in clinical institutions, where clinicians diagnosed several patients for a Single-Modality vs Multi-Modality comparison. In these tests, we used both the prototype-single-modality and prototype-multi-modality repositories for the comparison. Likewise, this dataset gathers information from both the BreastScreening and MIDA projects. These research projects deal with a recently proposed technique in the literature: Deep Convolutional Neural Networks (CNNs). From a developed User Interface (UI) and framework, these deep networks will incorporate several datasets in different modes. For more information about the available datasets, please follow the Datasets page on the Wiki of the meta information repository. Last but not least, you can find further information on the Wiki of this repository. We also have several demos on our YouTube Channel; please follow us.

    Citing

    We kindly ask scientific works and studies that make use of the repository to cite it in their associated publications. Similarly, we ask open-source and closed-source works that make use of the repository to warn us about this use.

    You can cite our work using the following BibTeX entry:

    @inproceedings{10.1145/3399715.3399744,
    author = {Calisto, Francisco Maria and Nunes, Nuno and Nascimento, Jacinto C.},
    title = {BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis},
    year = {2020},
    isbn = {9781450375351},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3399715.3399744},
    doi = {10.1145/3399715.3399744},
    abstract = {This paper describes the field research, design and comparative deployment of a multimodal medical imaging user interface for breast screening. The main contributions described here are threefold: 1) The design of an advanced visual interface for multimodal diagnosis of breast cancer (BreastScreening); 2) Insights from the field comparison of Single-Modality vs Multi-Modality screening of breast cancer diagnosis with 31 clinicians and 566 images; and 3) The visualization of the two main types of breast lesions in the following image modalities: (i) MammoGraphy (MG) in both Craniocaudal (CC) and Mediolateral oblique (MLO) views; (ii) UltraSound (US); and (iii) Magnetic Resonance Imaging (MRI). We summarize our work with recommendations from the radiologists for guiding the future design of medical imaging interfaces.},
    booktitle = {Proceedings of the International Conference on Advanced Visual Interfaces},
    articleno = {49},
    numpages = {5},
    keywords = {user-centered design, multimodality, medical imaging, human-computer interaction, healthcare systems, breast cancer, annotations},
    location = {Salerno, Italy},
    series = {AVI '20}
    }
    


    Prerequisites

    The following list shows the required dependencies for running this project locally:

    • Git or any other Git or GitHub version control tool
    • Python (3.5 or newer)

    Here are some tutorials and documentation, if needed, to feel more comfortable about using and playing around with this repository:

    Usage

    Follow the instructions here to set up the current repository and extract the present data. To understand what this repository is used for, read the following steps.

    Installation

    At this point, the only way to install this repository is manual. Eventually, this will be accessible through pip or any other package manager, as mentioned on the roadmap.

    Nonetheless, this kind of installation is as simple as cloning this repository. Virtually all Git and GitHub version control tools are capable of doing that. Through the console, we can use the command below, but other ways are also fine.

    git clone https://github.com/MIMBCD-UI/dataset-uta4-rates.git

    Optionally, the module/directory can be installed into the designated Python interpreter by moving it into the site-packages directory at the respective Python directory.
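
    A minimal sketch of that optional step, assuming the repository was cloned into the current directory and a standard CPython layout (the exact site-packages path varies per system; it is printed by the first command):

    # print the active interpreter's site-packages directory
    python -c "import site; print(site.getsitepackages()[0])"
    # copy the cloned repository into it (hypothetical destination; adjust as needed)
    cp -r dataset-uta4-rates "$(python -c 'import site; print(site.getsitepackages()[0])')"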

    Demonstration

    Please feel free to try out our demo. It is a script called demo.py in the src/ directory. It can be used as follows:

    python src/demo.py

    Just keep in mind this is only a demo, so it does nothing more than download data to an arbitrary destination directory if that directory does not exist or has no content. Also, we did our best to make the demo as user-friendly as possible, so, above everything else, have fun! 😁

    Roadmap


    We need to follow the repository goal by addressing the information described here. Therefore, it is of chief importance to scale this solution supported by the repository. The repository solution follows best practices, achieving the Core Infrastructure Initiative (CII) specifications.

    Besides that, one of our goals involves creating a configuration file to automatically test and publish our code to pip or any other package manager. It will be most likely prepared for the GitHub Actions. Other goals may be written here in the future.

    Contributing

    This project exists thanks to all the people who contribute. We welcome everyone who wants to help us improve this downloader. As follows, we present some suggestions.

    Issuer

    Whether something seems to be missing or you need support, just open a new issue. Regardless of whether it is a simple request or a fully structured feature, we will do our best to understand it and, eventually, address it.

    Developer

    We like to develop, but we also like collaboration. You could ask us to add some features… or you could do it yourself and fork this repository. Maybe even start a side project of your own. In the latter cases, please let us share some insights about what we currently have.

    Information

    This section summarizes the important items of this repository, addressing the fundamental resources that were crucial to it.

    Related Repositories

    The following list represents the set of related repositories for the presented one:

    Dataset Resources

    To publish our datasets we used a well-known platform called Kaggle. To access our project’s Profile Page, just follow the link. For this purpose, three main resources, uta4-singlemodality-vs-multimodality-nasatlx, uta4-sm-vs-mm-sheets and uta4-sm-vs-mm-sheets-nameless, are published on this platform. Moreover, the Single-Modality vs Multi-Modality dataset is available on our MIMBCD-UI Project page on data.world. Last but not least, datasets are also published on the figshare and OpenML platforms.

    License & Copyright

    Copyright © 2020 Instituto Superior Técnico


    The dataset-uta4-rates repository is distributed under the terms of GNU AGPLv3 license and CC-BY-SA-4.0 copyright. Permissions of this license are conditioned on making available complete elements from this repository of licensed works and modifications, which include larger works using a licensed work, under the same license. Copyright and license notices must be preserved.

    Team

    Our team brings everything together, sharing ideas and the same purpose to develop even better work. In this section, we list the people who were important for this repository, as well as the respective links.

    Authors

    Promoters

    • Hugo Lencastre
    • Nádia Mourão
    • Bruno Dias
    • Bruno Oliveira
    • Luís Ribeiro Gomes
    • Carlos Santiago

    Acknowledgements

    This work was partially supported by national funds through FCT and IST through the UID/EEA/50009/2013 project and the BL89/2017-IST-ID grant. We thank Dr. Clara Aleluia and her radiology team at HFF for valuable insights and for helping us use the Assistant on a daily basis. From IPO-Lisboa, we would like to thank the medical imaging teams of Dr. José Carlos Marques and Dr. José Venâncio. From IPO-Coimbra, we would like to thank the radiology department director and the whole team of Dr. Idílio Gomes. Also, we would like to extend our acknowledgments to Dr. Emília Vieira and Dr. Cátia Pedro from Hospital Santa Maria. Furthermore, we want to thank the whole radiology department team of HB for their participation. Last but not least, a great thanks to Dr. Cristina Ribeiro da Fonseca, who among others has been giving us crucial information for the BreastScreening project.

    Supporting

    Ours is a non-profit organization; however, we have many needs across our activities. From infrastructure to services, we need time, contributions and help to support our team and projects.

    Contributors

    This project exists thanks to all the people who contribute. [Contribute].

    Backers

    Thank you to all our backers! 🙏 [Become a backer]

    Sponsors

    Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]


    Institutions: FCT · FCCN · ULisboa · IST · HFF. Departments: DEI. Laboratories: SIPg · ISR · LARSyS · ITI · INESC-ID. Domain: EU · PT.
    Visit original content creator repository https://github.com/MIMBCD-UI/dataset-uta4-rates
  • erm_marspeople

    ERM Mars People (Beta release)

    This mod adds Mars People from Metal Slug as enemies. It is also used to test 2-way animations.

    They are weak against Acid/Poison (Max 85 resist).

    Discord: https://discord.gg/BwWXygyEyQ

    ERM – Features Reel: https://www.youtube.com/watch?v=phLRReAjxHA

    Credit

    Code files are licensed under GNU LGPLv3.

    All graphics and sounds in this mod are properties of SNK Corporation, Metal Slug series. They are used for experimental
    purposes. Graphic assets are converted from the sprites in https://retrogamezone.co.uk/metalslug/creatureenemies.htm

    Requirement

    • Enemy Race Manager >= 1.15
    • Factorio Standard Library >= 1.4.6
    • Factorio Base >= 1.1

    Features

    Tier 1 Units

    Mars People – Normal

    • Attack: Laser/Explosion
    Mini Ufo

    • Attack: Laser

    Tier 2 Units

    Mars People – Icy

    • Attack: Cold (slow movement)
    Eye ball UFO

    • Attack: Electric (1 radius AOE)
    Laser UFO

    • Attack: Laser
    Daimanji – Dropship

    • Drops Marspeople / Mini UFOs
    Builder

    • Turret, “Exits” structures

    Tier 3 Units

    Marspeople – Fire

    • Attack: Fire
    UFO

    • Attack: Physical
    Daimanji – Purpleball

    • Attack: Laser/Electric (2 radius AOE)
    Daimanji – Thunderbolt

    • Attack: Laser/Electric (3 radius AOE) (slow movement)

    Physical resistance: 95
    Weak elemental resistance: 85
    Elemental resistance: 90

    Visit original content creator repository
    https://github.com/heyqule/erm_marspeople

  • system-paths-clipboard

    System Paths Clipboard

    The System Paths Clipboard application is an extended clipboard tool that lets users store, edit, and conveniently reuse recently copied file paths. Whenever a system path (Windows/Linux) is copied with Ctrl + C, the application saves it into a database, so users can easily access and paste it later by clicking the application icon (the application can also be opened with the Ctrl + Left Shift shortcut) and searching for the desired path. The most recently copied path (by default, a path becomes the selected one right after it is copied) or a path chosen from the application window can be pasted at any time with Ctrl + B.

    Paths can also be edited within the application; a checkbox switches between edit modes (save as a new path / edit in place). An edited path (or a newly saved one) that matches an existing path is moved to the top of the list instead of being added again, to avoid duplicates.

    Screenshots

    2

    1

    Application Components

    AppManager (Application Manager)

    • MainWindow: Manages the user interface and serves as the main window of the application.

    • KeyListener: Manages keyboard shortcuts and the copying and pasting of content.

    • ClipboardManager: Manages clipboard content and acts as a bridge between the UI and the database.

    Requirements

    1. Python: Install Python 3.x. You can download Python from python.org.

    2. Libraries: Install required libraries using the following command:

      cd project
      pip install -r requirements.txt

      Application required:

      • pynput

      • pyperclip

      • PyQt5

      • PyAutoGUI

    Running the Application

    1. Download the code: clone the System Paths Clipboard application code.

        git clone https://github.com/ZuzannaZawartka/system-paths-clipboard.git
    2. Run from Terminal or Command Prompt:

      • Open a terminal (Linux/Mac) or command prompt (Windows).

      • Navigate to the directory containing the main.py file of the application

        cd project
      • Install required libraries

         pip install -r requirements.txt
      • Run the application by entering the command:

        python main.py

    How to Use the Application

    1. Launch the application.

    2. The application will appear as an icon in the bottom right corner of the screen (Open the application by clicking the icon or using the Ctrl + Left Shift shortcut )

    3. Use keyboard shortcuts (e.g., Ctrl+C, Ctrl+B) to manage the clipboard. Each path copied using Ctrl+C will appear in the application window.

    4. Add, remove, and edit stored items using the user interface.

    Graphics Resources

    • Application Icon: The application icon was obtained from freepik.com Icon designed by zafdesign
    Visit original content creator repository https://github.com/ZuzannaZawartka/system-paths-clipboard
  • sitecore-itemizer

    What is it?

    A couple of powershell scripts and a gulpfile.

    What does it do?

    It parses .item files, grabs fields (blob in this case, but can be extended), and generates tangible content items from those fields.

    Example: /sitecore/assets/css_asset.item -> /sitecore/assets/css_asset.css

    It also does the inverse: when a change is made to the content item, it updates the associated .item file (right now it creates another version of the field in the item).

    Example: /sitecore/assets/css_asset.css-> /sitecore/assets/css_asset.item

    It will only update the associated files if a change has been detected and the field content does not match (so you won’t get stuck in an infinite update loop).

    What do I need it for?

    The first use case I thought of for this was SXA, or the local development of themes for SXA, to help the development workflow around assets that will exist in Sitecore itself (CSS/JS, etc.). I also figure it could be used outside of that, anywhere you have assets stored in the file system that you will eventually want in Sitecore (the media library, for example).

    How do I use it

    Should be as easy as:

    1. Clone, download, whatever, just get these files into your project

    2. Update the paths.themeSrc and paths.itemSrc variables in the gulpfile.js

    3. npm install to get the needed dependencies (yes this is assuming you have node installed on your local machine)

    4. gulp default to start the gulp watchers (yellow is a change to an asset, blue is a change to an item)

    Visit original content creator repository
    https://github.com/vandsh/sitecore-itemizer

  • iron-array


    iron-array

    Setup

    Clone

    git clone --recurse-submodules git@github.com:inaos/iron-array.git
    

    Git commit-hooks

    Execute the following commands:

      cp conf/pre-commit .git/hooks/
    

    Build

    We use the inac cmake build system in combination with different libraries, which can be installed using miniconda3. In particular, one can install LLVM from the numba channel, and MKL and SVML from the Intel channel, in a cross-platform, portable way with:

    $ conda install 'llvmdev>=13'
    $ conda install -c intel mkl-include
    $ conda install -c intel mkl-static
    $ conda install -c intel icc_rt    # SVML
    

    Note: it looks like recent versions of conda (at least when using MacOSX) have dependency issues when installing the previous packages. You can work around this by using mamba instead. You can install mamba with:

    $ conda install mamba -n base -c conda-forge
    

    It is worth noting that conda-forge channel should only be used for installing mamba. In particular, I have detected issues when using the llvmdev package in conda-forge!

    Beware: currently ironArray only supports LLVM 11. Also, we strongly suggest using the numba channel with conda/mamba to install the LLVM package. In particular, we discourage using the native LLVM libraries in the system (whether installed with apt, brew or any other packager), even if they install LLVM 10 (the numba team seems to be doing a great job at packaging).
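
    Under that constraint, a hedged example of pinning the LLVM toolchain from the numba channel (the exact version string available on the channel may differ):

    $ conda install -c numba 'llvmdev=11'
    # or, with mamba as suggested above:
    $ mamba install -c numba 'llvmdev=11'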

    Windows

    • INAC build setup

      • Make sure that you have a configured repository.txt file in ~\.inaos\cmake
      • Also you’ll need a directory ~\INAOS (can be empty)
    • Create a build folder

         mkdir build
         cd build
      
    • Invoke CMake; we have to define the generator as well as the build type

         cmake -G"Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Debug ..
         cmake -G"Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
      
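
    With a multi-config generator such as Visual Studio, the configuration is also selected at build time; a generic CMake follow-up (not specific to this project) would be:

         cmake --build . --config Debug
         cmake --build . --config RelWithDebInfo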

    Mac

    • INAC build setup:

      • Make sure that you have a configured repository.txt file in ~/.inaos/cmake
      • Also you’ll need a directory ~/INAOS (can be empty)
    • Create a build folder:

         mkdir build
         cd build
      
    • Invoke CMake; we have to define the build type:

         cmake -DCMAKE_BUILD_TYPE=Debug ..
         cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
      

    Linux

    • INAC build setup

      • Make sure that you have a configured repository.txt file in ~/.inaos/cmake
      • Also you’ll need a directory ~/INAOS (can be empty)
    • MKL setup. For Ubuntu machines, it is best to use Intel’s Ubuntu repo (but you can use conda packages described above too):

         wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
         apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
         sudo sh -c 'echo deb https://apt.repos.intel.com/mkl all main > /etc/apt/sources.list.d/intel-mkl.list'
         sudo apt-get update && sudo apt-get install intel-mkl-64bit-2019.X
      
    • Create a build folder

         mkdir build
         cd build
      
    • Invoke CMake; we have to define the build type, and only two types are supported

         cmake -DCMAKE_BUILD_TYPE=Debug ..
         cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
      
    • Some Linux machines (ClearLinux, Gentoo?) require the use of the llvm-config utility. You can enforce its use with -DDISABLE_LLVM_CONFIG=False:

         cmake -DCMAKE_BUILD_TYPE=Debug -DDISABLE_LLVM_CONFIG=False ..
         cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DDISABLE_LLVM_CONFIG=False ..
      
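
    After configuring, the build step itself is standard CMake (a generic sketch; the project may provide additional targets):

         cmake --build .    # or: make -j$(nproc) with the default Makefiles generator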

    Tracing

    Sometimes it is useful to activate the tracing mechanism for debugging purposes. Example:

    $ env INAC_TRACE='*' ./perf_vector_expression  -e 1 -E 2 -M 3 -t 10 -l 0 -c 9 -f 2
    Time for computing and filling X values: 0.0523 s, 2918.5 MB/s
    Time for compressing and *storing* X values: 0.106 s, 1434.1 MB/s
    Compression for X values: 152.6 MB -> 11.2 MB (13.6x)
    Time for computing and filling Y values: 0.0665 s, 2296.2 MB/s
    Time for compressing and *storing* Y values: 0.135 s, 1130.3 MB/s
    Compression for Y values: 152.6 MB -> 152.6 MB (1.0x)
    [iarray.error] - Error compressing a blosc chunk /Users/faltet/inaos/iron-array/src/iarray_expression.c:853
    Error during evaluation.  Giving up...
    

    Expressions

    • For now only element-wise operations are supported in expressions.

    • The iron-array library supports disabling the SVML optimization by setting a DISABLE_SVML environment variable to any value. This can be useful for debugging purposes.
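
    For instance, reusing the benchmark binary from the Tracing section above, a single run with SVML disabled could look like this (any value for the variable works):

    $ env DISABLE_SVML=1 ./perf_vector_expression -e 1 -E 2 -M 3 -t 10 -l 0 -c 9 -f 2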

    Visit original content creator repository https://github.com/inaos/iron-array