Category: Blog

  • favicon

    <link rel="icon" type="image/x-icon" href="https://favicon.StateTagApp.com/favicon.ico">
    
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-57x57.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-60x60.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-72x72.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-76x76.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-114x114.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-120x120.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-144x144.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-152x152.png">
    <link rel="apple-touch-icon"  href="https://favicon.StateTagApp.com/apple-icon-180x180.png">
    <link rel="icon" type="image/png"   href="https://favicon.StateTagApp.com/android-icon-192x192.png">
    <link rel="icon" type="image/png"  href="https://favicon.StateTagApp.com/favicon-32x32.png">
    <link rel="icon" type="image/png"  href="https://favicon.StateTagApp.com/favicon-96x96.png">
    <link rel="icon" type="image/png"  href="https://favicon.StateTagApp.com/favicon-16x16.png">
    

    Visit original content creator repository
    https://github.com/StateTagApps/favicon

  • Login_page

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    npm test

    Launches the test runner in interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc.) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and mid-sized deployments, and you shouldn’t feel obligated to use this feature. However, we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Visit original content creator repository
    https://github.com/01062000/Login_page

  • mlogger

    MLOGGER

    About: module for multi-file level logging
    Author: F.Pessolano
    Licence: MIT

    NO LONGER SUPPORTED

    Description
    Module for implementing centralised level loggers with multiple files and aggregation of log lines.
    Each log line follows this format:

    DATE_LINE_CREATION    ID  LEVEL   MESSAGE DATA    DATE_LAST_CHANGE
    

    The mlogger module supports the levels LOG, INFO, ERROR, WARNING, RECOVERED and PANIC. Apart from the traditional message, data can be included in the log line
    and accumulated over time (in this case the date of the last update is also added, starting from the first accumulation).
    Log files are stored in the folder ./log/

    Initialisation
    A logger is initialised with:

    logId, er := mlogger.DeclareLog(name, date) 
    

    Where name is the log file name. A new log file is created every day, and the date is appended to the name if date is true.
    An id is returned in logId (int), and an error is returned in er, if any.
    The log file is formatted with the method:

    mlogger.SetTextLimit(logId, lm, li, ll)
    

    Where lm, li and ll are the maximum numbers of characters to be used for the message text, id and level. If 0 is given, no restriction is applied.
    The logger can also be set to echo all written lines to the console by toggling the verbose flag:

    mlogger.Verbose({true|false})
    

    Usage
    A log line can be stored by using the method associated with a given level (Log, Info, Error, Warning, Recovered and Panic). For example:

    mlogger.Log(logId, LoggerData{Id: string, Message: string, Data: []int, Agregate: bool})  
    

    LoggerData is a struct containing the log line data.
    When Agregate is true, the values in Data are summed and the originally written log line is updated with the new value and the latest modification date.
    The mlogger.Panic level method accepts an additional parameter:

    mlogger.Panic(logId, LoggerData{Id: id, Message: message, Data: data, Agregate: aggregate}, quit)  
    

    quit is a bool. When set, it forces execution to halt immediately.
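
    Putting these calls together, here is a minimal usage sketch based only on the calls documented above. The import path is assumed from the repository URL, LoggerData is assumed to be exported by the package, and the file name, id and values are illustrative.

    package main

    import "github.com/fpessolano/mlogger"

    func main() {
        // declare a daily log file named "service"; true appends the date to the name
        logId, err := mlogger.DeclareLog("service", true)
        if err != nil {
            panic(err)
        }

        // limit the message, id and level fields to 80, 20 and 10 characters
        mlogger.SetTextLimit(logId, 80, 20, 10)
        // echo every written line to the console as well
        mlogger.Verbose(true)

        // write an aggregating INFO line: Data values are summed across calls
        mlogger.Info(logId, mlogger.LoggerData{
            Id:       "sensor-1",
            Message:  "readings received",
            Data:     []int{1},
            Agregate: true,
        })
    }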

    Visit original content creator repository
    https://github.com/fpessolano/mlogger

  • AutoTruckX

    AutoTruckX

    An experimental project for autonomous vehicle driving perception with steering angle prediction and semantic segmentation.

    Semantic Segmentation

    Detailed description can be found at ./Semantic Segmentation/README.md.

    • SETR: A pure transformer encoder model and a variety of decoder unsampling models to perform semantic segmentation tasks. This model was adapted from and implemented based on the paper published in December 2020 by Sixiao Zheng et al., titled Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. In particular, the SETR-PUP and SETR-MLA variants, that is, the models with progressive upsampling and multi-level feature aggregation decoders, are selected and implemented based on their state-of-the-art performance on benchmark datasets.
    • TransUNet: A UNet-transformer hybrid model that uses UNet to extract high-resolution feature maps, a transformer to tokenize and encode images, and a UNet-like mechanism to upsample in decoder using previously-extracted feature maps. This model was adapted from and implemented based on the paper published in February 2021 by Jieneng Chen et al., titled TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation.
    • UNet: the well-known UNet model. This variant of UNet, which is 4-layers deep in the architecture, is adapted and implemented based on the paper published in November 2018 by Ari Silburt et al., titled Lunar Crater Identification via Deep Learning.
    [Figures: SETR, TransUNet, and UNet architecture diagrams]

    Figures above are authored in and extracted from the original papers respectively.

    [Figures: semantic segmentation inference results]

    It can be observed that the model performs reasonable semantic segmentation when run on test images and videos.

    Steering Angle Prediction

    Detailed description can be found at ./Steering Angle Prediction/README.md.

    • TruckNN: A CNN model adapted and modified from NVIDIA’s 2016 paper End to End Learning for Self-Driving Cars. The original model was augmented with batch normalization layers and dropout layers.
    • TruckResnet50: A CNN transfer learning model utilizing feature maps extracted by ResNet50, connected to additional fully-connected layers (see the sketch after this list). This model was adapted and modified from Du et al.’s 2019 paper Self-Driving Car Steering Angle Prediction Based on Image Recognition. The first 141 layers of ResNet50 (instead of the first 45 layers as in the original paper) were frozen from updating. The dimensions of the fully-connected layers were also modified.
    • TruckRNN: A Conv3D-LSTM model, also based on and modified from Du et al.’s 2019 paper mentioned above. The model consumes a sequence of 15 consecutive frames as input and predicts the steering angle at the last frame. Compared to the original model, max-pooling layers were omitted and batch normalization layers were introduced. Five convolutional layers were implemented, with the last convolutional layer connected to a residual output, followed by two LSTM layers, which is rather different from the model architecture proposed in the paper.
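
    As a rough illustration of the TruckResnet50 idea above (not the repository’s code), the sketch below freezes an initial slice of ResNet50 parameters and regresses a single steering angle through added fully-connected layers; the freezing granularity and head dimensions are assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    class TruckResnet50Sketch(nn.Module):  # hypothetical name, not the repository's class
        def __init__(self, frozen: int = 141):
            super().__init__()
            backbone = models.resnet50(weights="IMAGENET1K_V1")
            # "First 141 layers frozen" is interpreted here as the first 141 parameter
            # tensors; the repository may count layers differently.
            for i, p in enumerate(backbone.parameters()):
                p.requires_grad = i >= frozen
            self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC classifier
            self.regressor = nn.Sequential(  # illustrative head dimensions
                nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, 1),            # predicted steering angle
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.regressor(self.features(x).flatten(1))

    # angles = TruckResnet50Sketch()(torch.randn(4, 3, 224, 224))  # -> shape (4, 1)
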
    [Figures: TruckNN, TruckResnet50, and TruckRNN architecture diagrams]

    Figures above are authored in and extracted from the original papers respectively.

    For further visualization, saliency maps of the last ResNet50 convolutional layer (layer4) can be observed below:

    [Figures: saliency maps of ResNet50 layer4]

    The saliency maps suggest that the model focuses on the road.

    Visit original content creator repository https://github.com/shawnhan108/AutoTruckX
  • agent-action

    RelativeCI agent

    GitHub action that sends bundle stats and CI build information to RelativeCI.

    Other agents


    • Usage
    • Inputs
    • Secrets

    Usage

    View action.yml

    push/pull_request events

    # .github/workflow/node.js.yml
    name: Node.js CI
    
    on:
      push:
        branches:
          - master
      pull_request:
    
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 'latest'
    
          # Install dependencies
          - run: npm ci
    
          # Build and output bundle stats
          # Learn more: https://relative-ci.com/documentation/setup/agent/github-action#step-1-output-bundle-stats-json-file
          - run: npm run build -- --json webpack-stats.json
          
          - name: Send bundle stats and build information to RelativeCI
            uses: relative-ci/agent-action@v3
            with:
              key: ${{ secrets.RELATIVE_CI_KEY }}
              token: ${{ secrets.GITHUB_TOKEN }}
              webpackStatsFile: ./webpack-stats.json

    workflow_run events

    Read more about workflows triggered by forked repositories.

    Build and upload bundle stats artifacts using relative-ci/agent-upload-artifact-action

    # .github/workflows/build.yaml
    name: Build
    
    on:
      push:
        branches:
          - master
      pull_request:
    
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 'latest'
    
          # Install dependencies
          - run: npm ci
    
          # Build and output bundle stats to webpack-stats.json
          # Learn more: https://relative-ci.com/documentation/setup/agent/github-action#step-1-output-bundle-stats-json-file
          - run: npm run build -- --json webpack-stats.json
    
          # Upload webpack-stats.json to use on relative-ci.yaml workflow
          - name: Upload bundle stats artifact
            uses: relative-ci/agent-upload-artifact-action@v3
            with:
              webpackStatsFile: ./webpack-stats.json

    Send bundle stats and build information to RelativeCI

    The workflow runs securely in the context of the default branch (e.g. main). relative-ci/agent-action uses the build information (commit, message, branch) corresponding to the commit that triggered the Build workflow.

    # .github/workflows/relative-ci.yaml
    name: RelativeCI
    
    on:
      workflow_run:
        workflows: ["Build"]
        types:
          - completed
    
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - name: Send bundle stats and build information to RelativeCI
            uses: relative-ci/agent-action@v3.0.0-beta
            with:
              key: ${{ secrets.RELATIVE_CI_KEY }}
              token: ${{ secrets.GITHUB_TOKEN }}

    Inputs

    key

    Required RelativeCI project key

    token

    Required only when running during workflow_run events, to download the bundle stats artifacts.
    GitHub API token (e.g. secrets.GITHUB_TOKEN).

    webpackStatsFile

    Required when running during push or pull_request events
    Path to the bundle stats file

    Optional

    slug

    Default: GITHUB_REPOSITORY environment variable

    Your project slug

    includeCommitMessage

    Default: true

    Fetch the commit message from GitHub when the context commit is different from the commit that triggered the workflow (e.g. on pull_request events).

    debug

    Default: false

    Enable debug output

    artifactName

    Default: relative-ci-artifacts when running during workflow_run event

    The name of the artifact that contains the bundle stats uploaded by the triggering workflow
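
    For reference, a push/pull_request step combining the inputs above might look like the following (the slug value is illustrative; artifactName applies only to workflow_run runs):

    - name: Send bundle stats and build information to RelativeCI
      uses: relative-ci/agent-action@v3
      with:
        key: ${{ secrets.RELATIVE_CI_KEY }}
        token: ${{ secrets.GITHUB_TOKEN }}
        webpackStatsFile: ./webpack-stats.json
        slug: my-org/my-repo        # optional; defaults to GITHUB_REPOSITORY
        includeCommitMessage: true  # optional
        debug: false                # optional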

    Secrets

    RELATIVE_CI_KEY

    Your RelativeCI project key

    Visit original content creator repository
    https://github.com/relative-ci/agent-action

  • backbone-tutorial-series

    Backbone.js Tutorial Series

    This repository will store the code used in the Backbone.js tutorial series.

    Part 1

    Part 1 of the series gives a simple introduction to the Backbone.js framework, showing a simple “Hello World” View.

    Permalink: http://blog.fernandomantoan.com/serie-backbone-js-parte-1-introducao/
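
    For readers skimming this index, a minimal “Hello World” view of the kind Part 1 describes might look like the sketch below (illustrative only, not code from the tutorial; the #app element is an assumption):

    var HelloView = Backbone.View.extend({
      el: '#app',                      // attach the view to an existing element on the page
      render: function () {
        this.$el.html('Hello World');  // $el is the jQuery-wrapped element Backbone provides
        return this;                   // returning this allows chaining
      }
    });

    new HelloView().render();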

    Part 2

    This part focuses on introducing and explaining the Backbone.View class, showing practical samples and fully functional code using templates and events.

    Permalink: http://blog.fernandomantoan.com/serie-backbone-js-parte-2-view/

    Part 3

    Part 3 of the series shows how to use Backbone.Model and all of its functionality, and integrates the Model with the View created in Part 2. A simple Sinatra backend is written to show how Backbone.Model uses a RESTful API, through the Backbone.sync() method, to persist and synchronize data.

    Permalink: http://blog.fernandomantoan.com/serie-backbone-js-parte-3-model/
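
    As an illustration of the pattern Part 3 covers (not the tutorial’s code; the model name and URL are assumptions), a Backbone.Model backed by a RESTful Sinatra endpoint looks roughly like this:

    var Post = Backbone.Model.extend({
      urlRoot: '/posts',                  // the Sinatra backend exposes a RESTful resource
      defaults: { title: '', body: '' }
    });

    var post = new Post({ title: 'Hello', body: 'First post' });
    post.save(null, {
      success: function (model) {
        model.fetch();                    // with the server-assigned id, GET /posts/:id refreshes it
      }
    });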

    Part 4

    This part shows the Backbone.Collection class and extends the simple blog developed throughout the series, showing how to build a list of Posts and how to iterate over and use Backbone.Collection with all of Underscore.js’s array functionality. A new View is built to support the blog post listing, some other resources are implemented, and the Sinatra backend is modified to add support for listing Posts.

    Permalink: http://blog.fernandomantoan.com/serie-backbone-js-parte-4-collection/

    Part 5

    Part 5 of the series is the last one introducing features and components of the framework. It shows the Backbone.Router class along with Backbone.History, adding some routes to the blog and using the pushState HTML5 API. The Backbone.sync function is also presented, with some examples of customization, along with more details about the framework’s events API.

    Permalink: http://blog.fernandomantoan.com/serie-backbone-js-parte-5-router-historico-backbone-sync-eventos-e-mais/

    Part 6

    This is the last part of the Backbone.js tutorial series, focusing on a deep, practical walkthrough of building an application from scratch, using good practices, other JS libraries and a PHP backend. Some topics covered:

    • Laravel Framework
    • Models
    • Views
    • Underscore.js
    • Require.js
    • JWT Authentication
    • Twitter Bootstrap.

    Permalink: N/A

    Visit original content creator repository
    https://github.com/fernandomantoan/backbone-tutorial-series

  • adeia

    Adeia

    A discretionary authorization gem for Rails that gives you complete control over your app.

    Requirements

    Requires a User model with:

    • A name method, returning the name of the user.
    • A remember_token column, containing a generated token used for authentication.
    rails g model User name:string remember_token:string
    

    Installation

    Add this line to your application’s Gemfile:

    gem 'adeia'

    And then execute:

    $ bundle
    

    Or install it yourself as:

    $ gem install adeia
    

    Then include the engine’s routes in your routes.rb. The URL on which you mount the engine is up to you.

    # routes.rb
    
    mount Adeia::Engine => "/adeia"

    Finally copy the migrations by running rake adeia:install:migrations in your terminal.

    Tasks

    The first task to run is rake adeia:permissions elements="first_element, second_element". It creates the given elements in the database and a superadmin group which has all the permissions.
    Then you can run rake adeia:superuser user_id=your_id, which adds the given user to the superadmin group.
    If you need to add new groups, run rake adeia:groups groups="first_group, second_group".

    For example:

    rake adeia:permissions elements="admin/galleries, admin/articles, admin/categories"
    rake adeia:superuser user_id=59
    

    Documentation

    Authentication

    Adeia provides methods to sign in and out, to get or set the current user and to check if a user is signed in.

    # sign in a user
    sign_in @user
    # alternatively, sign in permanently
    sign_in @user, permanent: true
    
    # get and set the connected user
    current_user # => #<User>
    current_user = @an_other_user
    
    # check if the user is signed in
    if signed_in?
      # Do stuff
    end

    Authorization

    There are four different authorization methods at action-level.

    require_login! checks if the user is signed in. It raises the exception LoginRequired if not.

    def index
      require_login!
      @events = Event.all
    end

    authorize! checks if the user has the permissions to access the action. It raises AccessDenied if not.

    def new
      authorize!
      @event = Event.new
    end

    load_and_authorize! loads the suitable record and checks if the user has the permissions to access the action, taking into account the loaded record. It raises AccessDenied if not.
    The method returns the record, but it also automatically sets an instance variable named after the model.

    def edit
      @event = load_and_authorize!
      # assignation is optional here
    end

    authorize_and_load_records! loads the records taking into account the user’s permissions. It raises AccessDenied if the user doesn’t have access to any records.

    def index
      @events = authorize_and_load_records!
      # assignation is optional here
    end

    By default, each method (except require_login!) uses the following parameters:

    • controller: the controller’s name
    • action: the action’s name
    • token: GET parameter token
    • resource: fetch the resource from controller’s name

    You can override those parameters when invoking the method:

    def index
      authorize!(controller: 'events', action: 'new')
    end

    Adeia also provides controller-level methods to keep your code DRY.

    require_login adds the require_login! method to the controller’s actions.

    load_and_authorize adds the suitable methods to the controller’s actions:

    • index: authorize_and_load_records!
    • show, edit, update, destroy: load_and_authorize!
    • new, create, other actions: authorize!

    The two controller-level methods accept the restricting parameters only and except.

    class EventsController < ApplicationController
    
      require_login only: [:postpone]
      load_and_authorize except: [:postpone]
    
      def index; end
    
      def new; end
    
      def create; end
    
      def postpone; end
    
    end

    Controller methods

    When an authorization exception is raised by the engine, it automatically stores the current user’s location in a cookie. The method called is store_location and it is available in your controllers. You can then use redirect_back_or(default, message = nil), which either redirects to the stored location, if any, or redirects to the provided default path, with an optional message.
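
    As a minimal sketch of how these helpers combine (the controller, parameters and paths below are assumptions, not code from the gem):

    class SessionsController < ApplicationController
      def create
        user = User.find_by(name: params[:name])
        sign_in user, permanent: params[:remember_me].present?
        # return to the page that raised the authorization exception, or go home
        redirect_back_or root_path, "Welcome back, #{user.name}!"
      end
    end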

    Model methods

    TODO

    • User model
    • Permission model

    Visit original content creator repository
    https://github.com/doxa-tech/adeia

  • ode

    ode

    OSF Development Environment, IBM-modified.

    ODE has tooling for building projects written in C, C++, and Java.

    It is used for building on many platforms, including Linux and Windows.

    As coded, you will need to install the ksh package to run an ODE build.

    You need to set the TOOLSBASE environment variable to the directory where the ODE binaries can be found:

    export TOOLSBASE=${HOME}/eclipse-workspace/ode/ODE/inst.images/x86_linux_2/bin/
    

    You need to add the directory containing the ODE binaries to the PATH environment variable:

    export PATH=${HOME}/eclipse-workspace/ode/ODE/inst.images/x86_linux_2/bin/:$PATH
    

    You need to add the directory containing the library lib0500.so to the LD_LIBRARY_PATH environment variable:

    export LD_LIBRARY_PATH=${HOME}/eclipse-workspace/ode/ODE/inst.images/x86_linux_2/bin/:$LD_LIBRARY_PATH
    

    This package includes assistance for bootstrapping on Linux in the bootstrap directory; there is a script, bootstrap/bin/bootstrap_linux, which should bootstrap for Linux on whatever processor architecture you need. This script has been tested on Fedora 38 and RHEL 8, both on x86_64. Part of the bootstrap process writes to ~/.sandboxrc; if you want to retry the bootstrap you should remove or rename this file.

    There is a log of a successful bootstrap run in bootstrap/bootstrap_log.txt

    The resulting installable file tree is in ODE/inst.images/x86_linux_2 .

    A prebuilt ODE for x86_linux_2 is also supplied.
    The files under bootstrap/prebuilt/fedora38 were built on Fedora 38, and the
    files under bootstrap/prebuilt/rhel8 were built on Red Hat Enterprise Linux 8. It should not be necessary to use this prebuilt ODE, as bootstrap_linux should succeed.

    There is a MakeMake Java tool in the bbexample sandbox which is useful for
    making an initial set of ODE makefiles for a project that has previously been
    built with another build system. It is built if you run bootstrap/bin/build_bbexample, and you can run it with a command such as:

    tjcw:classes$ pwd
    /home/tjcw/eclipse-workspace/ode/ODE/src/bbexample/export/classes
    tjcw:classes$ java -cp MakeMake.jar COM.ibm.makemake.bin.MakeMake -sub /home/tjcw/eclipse-workspace/BlueMatter/svntrunk/src
    tjcw:classes$
    

    Its usage message is:

    tjcw:classes$ java -cp MakeMake.jar COM.ibm.makemake.bin.MakeMake  -usage
    Usage: MakeMake [options] <path>
           path: The root directory of the source tree.
           options:
               -mfname <makefile name>
               -sub
               -clobber
               -info -usage -version -rev
    tjcw:classes$
    

    A sample project buildable with ODE is available with git clone git@github.com:IBM/BlueMatter.git

    Make a sandbox with a command like

    mksb -back ${HOME}/eclipse-workspace/ode/ODE/ -dir ${HOME}/eclipse-workspace/BlueMatter/  -m x86_linux_2 svntrunk
    

    Select the sandbox with

    workon -sb svntrunk
    

    Attempt the build with

    TRY_LINUX_BUILD_ANYWAY=1 build -k
    

    Visit original content creator repository
    https://github.com/IBM/ode

  • Projek_Game_Platformer_I-Nootropia

    Platformer (2D) Game I-Nootropia

    I-Nootropia is a platformer game inspired by “Bounce Tales” on the Symbian OS, but this game runs specifically on Android, with Android 7.0 (Nougat) as the minimum version. In addition, the gameplay incorporates a mathematical concept: basic calculus, covering the basics of limits, derivatives, and integrals. The player only advances to the next level after clearing all the calculus challenges (that is, answering all of them correctly). The hope is that this game can help prevent brain fog (a condition in which someone experiences confusion, forgetfulness, and reduced concentration and clarity of thought).

    Methodology

    For this game’s development, the Multimedia Development Life Cycle (MDLC) was used as the methodology because it has clear steps, focuses on multimedia, allows testing and changes, and ensures the game is fun and easy to use.


    Based on that diagram, the steps used here go only up to Testing, namely:

    1. Concept
    Observing the STT Wastukancana campus to create a fun and productive game concept.

    2. Material Collecting
    Collecting primary data through observation and secondary data like multimedia content for the game.

    3. Design
    Creating use case, activity, and class diagrams to plan the game’s structure, look, and requirements.

    4. Assembly
    The game is built using Unity to allow interaction with users from various groups.

    5. Testing
    Testing the game’s displays: main menu, guide, environment, game over, pause, and finish.

    Game Overview


    All the features of the I-Nootropia game work well, including the math challenge buttons, the progress information shown during gameplay, and the overall display.

    Visit original content creator repository https://github.com/Andika-Aditya/Projek_Game_Platformer_I-Nootropia
  • nextjs-meals-searcher

    Next.js Meals Searcher

    Next.js Meals Searcher is a dynamic web application built with Next.js that allows users to search for and explore a variety of meal recipes. Leveraging the power of Next.js for server-side rendering and static site generation, this app delivers a fast and responsive user experience.

    Features

    • Meal Search: Quickly search for meals using keywords.
    • Meal Details: View detailed information about each meal, including ingredients, instructions, and more.
    • Search with URL Parameters: Search is implemented using URL search parameters instead of useState, allowing direct linking to specific searches (see the sketch after this list).
    • Responsive Design: Fully responsive design optimized for different devices.
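
    As a rough sketch of the URL-parameter search mentioned above (not the app’s actual code; the file path, API endpoint and types are assumptions), an App Router page can read the query directly from searchParams:

    // app/search/page.tsx (illustrative)
    type SearchPageProps = {
      searchParams: { query?: string }; // in recent Next.js versions this may be a Promise to await
    };

    export default async function SearchPage({ searchParams }: SearchPageProps) {
      const query = searchParams.query ?? "";
      const res = await fetch(
        `https://www.themealdb.com/api/json/v1/1/search.php?s=${encodeURIComponent(query)}`
      );
      const data = await res.json();

      return (
        <ul>
          {(data.meals ?? []).map((meal: { idMeal: string; strMeal: string }) => (
            <li key={meal.idMeal}>{meal.strMeal}</li>
          ))}
        </ul>
      );
    }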

    Technologies Used

    • Next.js: React framework for server-side rendering and static site generation.
    • TypeScript: Typed JavaScript for better development experience.
    • Tailwind CSS: Utility-first CSS framework for styling.
    • API Integration: Fetches meal data from an external API.

    Getting Started

    Prerequisites

    • Node.js and npm/yarn/pnpm/bun installed on your machine.

    Installation

    1. Clone the repository:

      git clone https://github.com/pnvdev/nextjs-meals-searcher.git
      cd nextjs-meals-searcher
    2. Install dependencies:

      npm install
      # or
      yarn install
      # or
      pnpm install
      # or
      bun install
    3. Running the Development Server:

      npm run dev
       # or
       yarn dev
       # or
       pnpm dev
       # or
       bun dev

    Open http://localhost:3000 in your browser to see the app.

    Learn More

    To learn more about Next.js, take a look at the Next.js documentation and the Learn Next.js interactive tutorial.

    Deployment

    The easiest way to deploy your Next.js app is to use the Vercel Platform.

    Contributing

    Contributions, issues, and feature requests are welcome! Feel free to check the issues page.

    License

    This project is open-source and available under the MIT License.

    Visit original content creator repository
    https://github.com/pnvdev/nextjs-meals-searcher