
  • antlr4-experiments



    Language: PT-BR

    Here are my studies and experiments using ANTLR4 (for language analysis). The target code in this case is C++, built on top of a base project, which can be found here:

    Just download the base project and place it in the same directory as the experiments.

    Note: the base project's directory must be named project so that the test build scripts work properly.

    If my code has helped you, please consider sponsoring me πŸ’™


    πŸ“‘ Table of Contents



    πŸ› οΈ Instructions

    To run a test, just run the script clean-and-build.sh or re-compile.sh followed by the name of the test directory, as in the examples below.

    ./clean-and-build.sh "1 - processing tokens (through visitor)"
    ./re-compile.sh "1 - processing tokens (through visitor)"

    After compilation, parsing is executed immediately, with the syntax file in the respective directory passed as a parameter.

    Thus, it is possible to change the test Lexer, Parser and Syntax files, as well as the main.cpp file in their respective directories.

    Brief description of the scripts:

    • clean.sh – Removes files from previous builds of the base project. Note: this script takes no parameters.
    • clean-and-build.sh – Rebuilds 100% of the base project from scratch.
    • re-compile.sh – Recompiles the base project incrementally, without rebuilding 100% of it.

    Note: the binary built from the base project is located inside the project directory, under the name parsing.


    πŸ˜ƒ Author

    Sponsor: melchisedech333
    Twitter: Melchisedech333
    LinkedIn: Melchisedech Rex
    Blog: melchisedech333.github.io


    πŸ“œ License

    BSD-3-Clause license


    Remember to give me
    a beautiful little star 🀩

    Visit original content creator repository https://github.com/melchisedech333/antlr4-experiments
  • Lottery-dapp

    Visit original content creator repository
    https://github.com/priyamshah112/Lottery-dapp

  • sonus-presto

    SonusPresto

    A bare-bones music player. Intended to utilize a filesystem to browse your music instead of automatically grouping by artist, album or genre. Focused on gesture controls. Supports CUE sheets, M3U playlists and Internet radio.

    Minimum supported Android version: 5.0 (Lollipop, API 21)

    Features

    Here’s what you can do in SonusPresto:

    • play music

    • browse your music in the way you have structured it, using the filesystem

    • play all files in a selected folder and sub-folders recursively

    • open CUE sheets and M3U playlists as folders

    • gapless playback

    • play Internet radio from M3U playlists

    • change the visual theme and language (English/Russian)

    • control playback using touch and swipe gestures on a single button

    • listen to the audio track of video files

    • delete files and folders

    Non-features

    Here’s what SonusPresto can’t do:

    • view audio tags or cover art

    • quickly set a precise playback position

    • view the current playback position in seconds

    • use separate buttons to control playback (i.e. there are no dedicated prev/stop/next/… buttons)

    • create custom playlists or manage a playback queue

    • basically, anything else that is not listed in the “Features” section πŸ™‚

    Disclaimer: new features will most likely never be added.

    Specifics

    Here are some quirks:

    • the swiping of the bottom button may not work on some devices with specific Android gesture settings

    • since SonusPresto doesn’t read tags, it can’t determine the actual artist and album name of a music track; instead, it uses folder names for that (except for playlist items)

    • SonusPresto may not be compatible with Last.fm scrobblers, i.e. it will most likely send incorrect info because it does not use tags

    • SonusPresto doesn’t know what formats your device supports, so it will just show every file that has any of the supported extensions (i.e. not all displayed files can actually be played)

    Screenshots

    Dark theme with a regular music track · Options popup and a folder highlight · Light theme with Internet radio

    Download

    Download the latest app version here.

    Build

    SonusPresto is made with Flutter.

    To build this application do the following:

    1. Download this repository.

    2. Install Flutter and Android SDK. It’s easier to do it from Android Studio.

    3. At this point you can already debug the application from Android Studio. To build the release version follow the next steps.

    4. Go inside the repository root and create the file android/key.properties based on android/key.template.properties. Fill in all fields. For more information see the official “Signing the app” tutorial.

    5. To build the release APK run ./build-apk.sh inside the repository root.

    License

    GPLv3

    Visit original content creator repository https://github.com/alkatrazstudio/sonus-presto
  • react-animate-props

    react-animate-props

    React HOC (higher order component) method, and React Hook for transforming
    your favorite components to animate prop values on change.

    This package uses Tweenkle for handling
    the tweening of prop values. It’s not as full-featured as GSAP,
    but it works pretty well for basic value and object tweening.

    Install

    Via npm

    npm install --save react-animate-props

    Via Yarn

    yarn add react-animate-props

    How to use

    react-animate-props now offers two(!) ways to animate props in both
    your class-based and functional React components.

    Hook

    useAnimateProps

    Parameters

    • prop : Number – Value to animate
    • options : Object – Options to define the tween properties to use.

    Default options:

    {
      delay: 0,                           // Delay to apply before the tween starts
      duration: 1000,                     // Duration of the tween in milliseconds
      ease: Easing.Quad.Out,              // Ease to use for the tween, @see [Tweenkle](https://github.com/ryanhefner/tweenkle) for options
      onAnimateProgress: value => value,  // Callback to use during the tweening process, as well as being able to manipulate the value during the tween
      onAnimateComplete: value => value,  // Callback for when the tween has completed, as well as being able to manipulate the final value of the tween
    }
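    The default ease, Easing.Quad.Out, is a standard quadratic ease-out curve. As a minimal, language-agnostic sketch of what such an ease does (shown in Python for brevity; Tweenkle’s actual implementation may differ), it maps a normalized progress t in [0, 1] to an interpolated value:

```python
def quad_out(t):
    """Quadratic ease-out: fast start, decelerating finish."""
    return 1 - (1 - t) ** 2

def animate_value(start, end, t, ease=quad_out):
    """Interpolate between start and end at normalized progress t."""
    return start + (end - start) * ease(t)

# Progress 0 yields the start value, progress 1 the end value.
print(animate_value(0, 100, 0.0))   # 0.0
print(animate_value(0, 100, 0.5))   # 75.0
print(animate_value(0, 100, 1.0))   # 100.0
```

    With an ease-in curve such as Easing.Quad.In, the shape is mirrored: slow start, accelerating finish.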

    Example

    import React from 'react';
    import { Easing } from 'tweenkle';
    import { useAnimateProps } from 'react-animate-props';
    
    const AnimatedNumberLabel = ({ number }) => {
      const animatedNumber = useAnimateProps(number, {
        ease: Easing.Quad.In,
        delay: 500,
        duration: 1500,
        onAnimateProgress: value => {
          return Math.round(value);
        },
        onAnimateComplete: value => {
          return Math.round(value);
        },
      });
    
      return <span>{animatedNumber}</span>;
    };
    
    export default AnimatedNumberLabel;

    HOC (Higher Order Component)

    animateProps is a higher order component
    that allows you to easily create components whose props animate when changed.

    Whether you’re writing a new component, or would like to make an animated version
    of an existing component, just export your component and pass it through animateProps.

    Parameters

    • component:Class – Class to apply animateProps logic to.

    • defaultProps:Object – Default props declared for the component being animated. (Default: {})

    Properties

    • animatedProps:Object – Object defining which props to animate, and the tween
      settings for each. animateProps uses the Tweenkle
      tweening library, specifically a Tween instance, and you can pass whatever props that
      library supports via the tween settings. You can find out more by reading the
      Tweenkle README.

    • onAnimateProgress:Function – Callback available to manipulate the prop before
      it is applied to the state. (Example: (prop, value) => { return { [prop]: value }; })

    • onAnimateComplete:Function – Callback fired when the animation for a prop completes.
      (Example: (prop, value, tweensActive) => {})

    Example

    import React, { Component } from 'react';
    import PropTypes from 'prop-types';
    import animateProps from 'react-animate-props';
    import { Easing } from 'tweenkle';
    
    class AnimatedNumberLabel extends Component {
      render() {
        const {
          number,
        } = this.props;
    
        return (
          <span>
            {number}
          </span>
        );
      }
    }
    
    AnimatedNumberLabel.propTypes = {
      animatedProps: PropTypes.object,
      number: PropTypes.number,
      onAnimateProgress: PropTypes.func,
      onAnimateComplete: PropTypes.func,
    };
    
    AnimatedNumberLabel.defaultProps = {
      animatedProps: {
        number: {
          ease: Easing.Quad.In,
          delay: 500,
          duration: 1500,
        },
      },
      number: 0,
      onAnimateProgress: (prop, value) => {
        return {
          [prop]: Math.round(value),
        };
      },
      onAnimateComplete: (prop, value, tweensActive) => {
        return {
          [prop]: Math.round(value),
        };
      },
    };
    
    export default animateProps(
      AnimatedNumberLabel,
      AnimatedNumberLabel.defaultProps
    );

    License

    MIT Β© Ryan Hefner

    Visit original content creator repository
    https://github.com/ryanhefner/react-animate-props

  • MD_MIDIFile

    MD_MIDIFile Standard MIDI File Interpreter Library

    arduino-library-badge

    This library allows Standard MIDI Files (SMF) to be read from an SD card and played through a MIDI interface. SMF can be opened and processed, with MIDI and SYSEX events passed to the calling program through callback functions. This allows the calling application to manage sending to a MIDI synthesizer through serial interface or other output device, such as a MIDI shield.

    • SMF playing may be controlled through the library using methods to start, pause and restart playback.
    • SMF may be automatically looped to play continuously.
    • Time ticks are normally generated by the library during playback, but this can be taken over by the user program if a different time base or synchronization with an external MIDI clock is required.

    External dependencies:

    • SdFat library found here, used by the library to read SMF from the SD card.
    • MIDI interface hardware as described in the library documentation, or similar: a USB interface with converter, or a MIDI shield.

    If you like and use this library please consider making a small donation using PayPal

    Library Documentation

    Visit original content creator repository https://github.com/MajicDesigns/MD_MIDIFile
  • gobin-info

    gobin-info

    build Go Report Card GitHub Releases

    gobin-info lists your locally installed Go binaries alongside their version and original Git repository.

    It’s kind of like a convenience wrapper around go version -m ... with some niceties on top, like vanity URL resolving.

    Installation

    go install github.com/philippgille/gobin-info@latest

    Usage

    You can run gobin-info in several modes:

    • gobin-info /path/to/dir lists info about the Go binaries in a given directory (relative or absolute)
    • gobin-info -wd lists info about the Go binaries in your working directory
    • gobin-info -gobin lists info about the Go binaries in your $GOBIN directory
    • gobin-info -gopath lists info about the Go binaries in your $GOPATH/bin directory
    • 🚧 gobin-info -path lists info about the Go binaries in your $PATH (not implemented yet)

    It prints a (❓) after the URL in case the URL couldn’t be reliably determined.

    Note: gobin-info doesn’t recurse into subdirectories. This might be added with an optional flag in the future.

    Example

    $ gobin-info -gopath
    Scanning /home/johndoe/go/bin
    arc         v3.5.1  https://github.com/mholt/archiver
    gopls       v0.11.0 https://go.googlesource.com/tools
    mage        (devel) https://github.com/magefile/mage
    staticcheck v0.3.3  https://github.com/dominikh/go-tools
    

    Raison d’Γͺtre

    Most of your CLI tools were probably installed with a package manager like apt or dnf on Linux, Homebrew on macOS, or Scoop on Windows. Then if you want to get the list of your installed tools, you can run apt list --installed, brew list or scoop list to list them, and if you want to know more about one of them you can run apt show ..., brew info ... or scoop info ....

    But what about the ones you installed with Go? You installed them with go install ... and they live in $GOPATH/bin or $GOBIN or maybe you move/symlink them to /usr/local/bin or so.

    • Now you don’t immediately know the origin of the tools. For example if there’s a binary called arc, is it github.com/mholt/archiver/v3/cmd/arc or github.com/evilsocket/arc/cmd/arc?
    • You could run arc --help and it might give a hint as to what exactly it is, but that’s not reliable
    • Or you run go version -m /path/to/arc and among the dozens of output lines you check the path or mod
      • But their values are not https://-prefixed, so you can’t click them in your terminal and have to copy paste them into your browser
      • Then for example arc has the module path github.com/mholt/archiver/v3, which leads to a 404 Not Found error on GitHub because of the v3
      • And for staticcheck the module path is honnef.co/go/tools, which is a vanity URL that doesn’t point to the original Git repository (https://github.com/dominikh/go-tools) and the browser also doesn’t redirect to it

    gobin-info makes all of this much easier.

    Visit original content creator repository https://github.com/philippgille/gobin-info
  • look-laugh

    Link to the deployed website

    https://theoriginalison.github.io/look-laugh/

    Files and Directories

    index.html
    script.js
    style.css
    README.md
    >images 
    

    look-laugh

    These files contain the Look and Laugh app, a mental health and wellness application to help relieve stress and anxiety.

    Description: Look & Laugh App “Search a word, have a laugh. Save your images, and jokes for later. Happy Searching!”

    As developers in the University of Pennsylvania LPS Bootcamp, we wanted to build an application that lets coders take a quick break to de-stress and refocus. “Look & Laugh” gives coders a way to get a quick laugh.

    The technologies used were: Zoom, Google Docs, Slack, GitHub, APIs, Google Fonts, HTML, CSS, Pure.css, JavaScript, jQuery, AJAX, and Local Storage. For a more extensive preview of each, visit the Look & Laugh Google documentation.

    Button configurations:

    • Search: Returns a randomized image and/or joke based on the user’s search input.
    • Save: Images and jokes are sent to the “Favorites” section for the user to view later.
    • Clear: The user can clear images and jokes from the “Favorites” section.

    Look & Laugh App Preview

    Credits

    Β© 2020 Roman, Deirdre, Rocky, & Alison

    Visit original content creator repository https://github.com/theoriginalison/look-laugh
  • Pneumonia_Classification

    Pneumonia Image Classification

    Author: Andy Peng

    The contents of this repository detail an analysis of the Pneumonia Image Classification project. This analysis is detailed in hopes of making the work accessible and replicable.

    Business problem:

    The task is to create a model that can accurately predict whether a patient has Pneumonia given their chest X-ray image.

    Data

    Our dataset comes from Kaggle. The dataset contains three folders: training, validation and testing. Each folder is filled with chest X-ray images used for training and testing the model that we will create.

    Methods

    • Descriptive Analysis
    • Modeling
    • Choices made
    • Key relevant findings from exploratory data analysis

    Results

    Visual 1

    > Normal Chest XRay

    Visual 2

    > Pneumonia Chest XRay

    Visual 3

    > First Activation of ModelI

    Visual 4

    > Sixth Activation of ModelI

    Models

    > ROC Curve of the different Models

    > Model Results
    • Accuracy/Recall – Baseline Model
    • Precision/F1 Score – Imbalance Model

    Recommendations:

    To summarize, we can see from the above that

    • Recall/Accuracy – ModelB
    • Precision/F1 Score/AUC – ModelI

    Our goal is to minimize the number of patients we classify as healthy when they do indeed have Pneumonia; therefore we want to minimize false negatives, in other words maximize recall. Our recommendation is to stick with ModelI. Although ModelB was slightly better in recall and accuracy, the difference was small, and ModelI did better in precision, F1 score and AUC. Therefore ModelI is the best model to use for predictions.
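    To make the recall/precision trade-off concrete, here is a minimal sketch of how these metrics are computed from confusion-matrix counts. The counts below are hypothetical, not the project’s actual results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, recall, precision and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    recall = tp / (tp + fn)        # share of true Pneumonia cases caught
    precision = tp / (tp + fp)     # share of Pneumonia predictions that are right
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

# Hypothetical counts: 380 true positives, 40 false positives,
# 10 false negatives (the costly case we want to minimize), 190 true negatives.
acc, rec, prec, f1 = classification_metrics(tp=380, fp=40, fn=10, tn=190)
print(f"accuracy={acc:.3f} recall={rec:.3f} precision={prec:.3f} f1={f1:.3f}")
```

    Driving false negatives down (raising recall) typically costs some precision, which is exactly the trade-off between ModelB and ModelI described above.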

    Limitations & Next Steps

    There are many things we didn’t do due to time and money constraints. For example, we could ask a doctor what they look for in a chest X-ray when determining whether a patient has Pneumonia. We could also use cross-validation or gather more data to further improve our models. (Future work: include an RNN model)

    For further information

    Please review the narrative of our analysis in our Jupyter notebook or review our presentation.

    For any additional questions, please contact andypeng93@gmail.com

    Repository Structure:

    Here is where you would describe the structure of your repository and its contents, for example:

    
    β”œβ”€β”€ README.md                       <- The top-level README for reviewers of this project.
    β”œβ”€β”€ Image Classification.ipynb             <- narrative documentation of analysis in jupyter notebook
    β”œβ”€β”€ presentation.pdf                <- pdf version of project presentation
    └── Visualizations
        └── images                          <- both sourced externally and generated from code
    
    
    Visit original content creator repository https://github.com/andypeng93/Pneumonia_Classification
  • Online-Fitness-Platform

    Online Fitness Platform

    This project focuses on developing a web-based MVC (Model-View-Controller) application for scheduling online fitness training sessions. The primary emphasis during the implementation was on creating essential UML (Unified Modeling Language) diagrams, including Class Diagrams, Activity Diagrams, Sequence Diagrams, and Use Case Diagrams.

    Technologies

    • Java
    • MVC
    • MySQL
    • Thymeleaf

    Roles

    1. Client
    2. Trainer
    3. Admin
    4. Platform Owner
    5. Sports Hour (Nutritionist)

    Functionality

    Registration and Profiles:

    • Clients and trainers register independently.
    • Registration data includes: name, surname, email, contact phone, address, credit card number, primary language, additional languages.
    • Clients provide additional details: height, weight, health status, goals, and a list of home workout equipment.
    • Trainers input qualifications, certificates, and titles. The account becomes active upon admin approval.

    Schedule and Booking:

    • Trainers maintain a list of available slots for the next month.
    • Clients can book slots with any trainer.
    • The schedule considers time zones of both clients and trainers.
    • Cancellation is allowed up to two hours before the session, after which it incurs a charge.
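    As an illustration of the time-zone handling and the two-hour cancellation rule described above, here is a minimal sketch (in Python for brevity; the platform itself is Java-based, and all names and slot data here are hypothetical):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# A trainer in Belgrade publishes a slot; the client is in New York.
slot_trainer = datetime(2024, 3, 1, 18, 0, tzinfo=ZoneInfo("Europe/Belgrade"))
slot_client = slot_trainer.astimezone(ZoneInfo("America/New_York"))
print(slot_client)  # 2024-03-01 12:00:00-05:00

def can_cancel_free(slot: datetime, now: datetime) -> bool:
    """Cancellation is free only up to two hours before the session."""
    return now <= slot - timedelta(hours=2)

# 15:30 local time: 2.5 hours before the 18:00 session, so no charge yet.
now = datetime(2024, 3, 1, 15, 30, tzinfo=ZoneInfo("Europe/Belgrade"))
print(can_cancel_free(slot_trainer, now))  # True
```

    Storing slots as zone-aware timestamps lets the same instant render correctly for both parties, and the cutoff comparison works regardless of which zone the "now" value comes from.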

    Finances:

    • A percentage of each training fee goes to the platform.
    • Clients can work with different trainers for their sessions.

    Conducting Training:

    • Trainers tailor workouts to clients’ equipment and goals.
    • During sessions, trainers monitor clients via camera and data from sports devices.
    • Clients can input data regarding weight changes and other parameters.

    Ratings and Progress Tracking:

    • After each session, both the trainer and client provide ratings (stars and comments).
    • Trainers can track the progress of clients they’ve worked with.

    Reports for the Owner:

    • The platform generates reports for the owner, including earnings for a specific interval, daily, weekly, and monthly earnings.
    • Provides a list of top-rated trainers and those with the highest earnings.

    This platform facilitates effective scheduling, training management, and financial tracking, offering users and the owner comprehensive insight into its operations.

    UML Diagrams

    Class Diagram

    Class diagram

    Use Case Diagram

    Use case diagram

    Activity Diagram

    Registration activity diagram · Report creation activity diagram

    Sequence Diagram

    Registration sequence diagram · Model sequence diagram · Report creation sequence diagram

    Visit original content creator repository https://github.com/anna02272/Online-Fitness-Platform