
RSS preview of Blog of ShinChven

Uncensored LLMs on Ollama

2025-01-16 01:19:02

Introduction

Large language models (LLMs) have revolutionized how we interact with technology. While many LLMs are censored for safety and ethical reasons, uncensored LLMs offer more freedom. Ollama, an open-source platform, allows users to run these models locally. This post explores the world of uncensored LLMs on Ollama, examining their capabilities, limitations, and potential benefits and risks.

Uncensored LLMs on Ollama

Ollama offers a variety of uncensored LLMs, each with unique characteristics. Here are some notable ones:

  • Llama 2 Uncensored: Based on Meta's Llama 2, this model comes in 7B and 70B parameter sizes and has double the context length of the original Llama 2.
  • WizardLM Uncensored: A 13B parameter model based on Llama 2, uncensored by Eric Hartford.
  • Wizard Vicuna Uncensored: Also by Eric Hartford, available in 7B, 13B, and 30B parameter sizes.
  • Dolphin-Mistral: Created by Eric Hartford, this model is based on Mistral and is known for handling sensitive topics.
  • Dolphincoder: Fine-tuned from Mixtral 8x7B by Eric Hartford, this model excels at code generation.
  • Nous Hermes Llama 2 13B: Based on Llama 2, this uncensored model is known for long responses and a lower hallucination rate.

Ollama also offers other uncensored LLMs, providing users with diverse options.
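Once a model has been pulled (e.g. with `ollama pull llama2-uncensored`) and `ollama serve` is running, any of these models can be queried over Ollama's local HTTP API on port 11434. Here is a minimal sketch using only the Python standard library; the payload fields follow Ollama's /api/generate interface, and the model name is one from the list above:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(generate("llama2-uncensored", "Summarize the plot of Hamlet."))
```

The same call works for any model name in the table below; only the `model` field changes.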

Capabilities and Limitations

Uncensored LLMs have unique capabilities but also limitations:

Capabilities

  • Enhanced Creativity: Uncensored models can generate more diverse and creative text formats.
  • Open Responses: These models answer directly, even on sensitive questions, rather than refusing.
  • Improved Accuracy: In certain domains, uncensored models may be more accurate.

Limitations

  • Harmful Content: Uncensored models may produce harmful or offensive outputs.
  • Misinformation and Bias: These models may generate misinformation or perpetuate biases.
  • Ethical Concerns: The use of uncensored LLMs raises ethical concerns.

It's important to recognize the trade-off between freedom of expression and responsible AI use.

Comparison of Uncensored LLMs

Here's a comparison of some key models on Ollama:

| Model | Size (Parameters) | Capabilities | Intended Use Cases |
|---|---|---|---|
| Llama 2 Uncensored | 7B, 70B | General purpose | Content creation, code generation, research |
| WizardLM Uncensored | 13B | General purpose | Content creation, research, dialogue generation |
| Wizard Vicuna Uncensored | 7B, 13B, 30B | General purpose | Content creation, code generation, research |
| Dolphin-Mistral | 7B | Deeper conversations | Exploring complex ideas, addressing sensitive issues |
| Dolphincoder | 8x7B | Code generation | Software development, code completion |
| Nous Hermes Llama 2 13B | 13B | Long responses | Instruction following, question answering |

Consider factors like model size, capabilities, and intended use cases when choosing a model.

Examples of Use

Uncensored LLMs can be used for:

  • Content Creation: Writing stories, poems, code, and more.
  • Code Generation: Generating code in various programming languages.
  • Research and Analysis: Exploring controversial topics and gathering diverse perspectives.
  • Education and Training: Creating interactive learning experiences.

Benefits and Risks

Benefits:

  • Privacy and Security: Enhanced privacy and security when running models locally.
  • Increased Efficiency: No network round-trips to a cloud service, which can reduce response latency for interactive use.
  • Cost Savings: Lower costs compared to cloud computing.
  • Customization: Greater control over models and their use.

Risks:

  • Security Vulnerabilities: Running and exposing a local model server carries potential security risks.
  • Resource Requirements: Larger models demand significant RAM, VRAM, and compute.
  • Maintenance: Users are responsible for maintenance and updates.

The availability of uncensored LLMs raises questions about acceptable content and highlights the need for ethical considerations.

Conclusion

Uncensored LLMs on Ollama offer a unique opportunity to explore language models without censorship. Use them responsibly and be aware of their limitations and risks. Mitigate risks by critically evaluating content, verifying information, and using the models ethically.

The future of uncensored LLMs involves discussions about AI safety, ethics, and societal impact. Finding a balance between innovation and responsible development is crucial. By understanding the capabilities, limitations, and ethical considerations, users can leverage these tools effectively and contribute to responsible AI development.

References

  • https://ollama.ai/library/llama2-uncensored
  • https://ollama.ai/library/wizardlm-uncensored
  • https://ollama.ai/library/wizard-vicuna-uncensored
  • https://ollama.ai/library/dolphin-mistral
  • https://ollama.ai/library/dolphincoder
  • https://ollama.ai/library/nous-hermes-llama2-13b
  • https://ollama.ai/library
  • https://bdtechtalks.com/2023/07/17/uncensored-large-language-models/
  • https://venturebeat.com/ai/why-researchers-are-building-uncensored-large-language-models/
  • https://www.theguardian.com/technology/2023/jul/21/ai-chatbots-uncensored-language-models-danger
  • https://huggingface.co/TheBloke/Llama-2-7B-Uncensored-GGML
  • https://github.com/facebookresearch/llama
  • https://www.timeshighereducation.com/campus/how-prepare-students-age-generative-ai
  • https://ollama.ai/
  • https://www.fastcompany.com/90884902/meta-is-releasing-its-powerful-llama-2-ai-language-model-for-free
  • https://www.bleepingcomputer.com/news/security/researchers-find-ways-to-jailbreak-metas-llama-2-large-language-model/
  • https://arstechnica.com/information-technology/2023/07/meta-releases-open-source-ai-model-llama-2-that-can-rival-googles-and-openais/

Mastering Styles in Microsoft Word

2025-01-16 01:02:00

Introduction

Tired of manually formatting every heading, paragraph, and quote in your Word documents? Microsoft Word's Styles feature can be a huge time-saver! Styles allow you to apply a set of formatting options with a single click, ensuring consistency and making your documents look polished and professional. This blog post will guide you through the essentials of using Styles.

What are Styles?

Think of Styles as pre-defined formatting templates. They contain a combination of formatting characteristics like font type, size, color, spacing, and more. Instead of manually adjusting each of these elements every time, you simply apply a Style, and Word takes care of the rest.

Using Built-in Styles

Word comes with a variety of built-in Styles. You can find them on the Home tab in the Styles group.

  • Heading Styles: (Heading 1, Heading 2, etc.) Use these for your document headings to create a clear structure.
  • Paragraph Styles: (Normal, No Spacing, etc.) Control the appearance of your paragraphs.
  • Character Styles: (Emphasis, Strong, etc.) Apply unique formatting to specific words or phrases within a paragraph.

To apply a Style, simply select the text you want to format and click on the desired Style in the Styles group.

Creating Custom Styles

While built-in Styles are a great starting point, you can create your own custom Styles to match your specific needs.

  1. Format a piece of text exactly the way you want it to appear.
  2. Select the formatted text.
  3. In the Styles group, click the More button (the small arrow pointing downwards).
  4. Click Create a Style.
  5. Give your Style a name and click OK.

Your new Style will now appear in the Styles group, ready to be used!

Modifying Existing Styles

You can also modify existing Styles, including the built-in ones:

  1. Right-click on the Style you want to modify in the Styles group.
  2. Select Modify.
  3. Make the desired changes to the formatting options.
  4. Click OK.

Why Use Styles?

  • Consistency: Ensure a uniform look throughout your document.
  • Efficiency: Save time by applying multiple formatting options with a single click.
  • Easy Updates: Change the formatting of an entire document by simply modifying a Style.
  • Navigation: Styles (especially heading styles) are essential for using the Navigation Pane effectively.

Conclusion

Mastering Styles in Word is a key step towards creating professional-looking documents with ease. By utilizing built-in Styles, creating custom Styles, and modifying existing ones, you can take control of your document formatting and significantly improve your workflow.

Connect Google Analytics with Google AdSense

2025-01-16 00:25:12

Introduction

Want to know how your website content is performing in terms of ad revenue? Linking Google Analytics with Google AdSense is the answer! This connection provides valuable insights into user behavior and helps optimize your monetization strategy.

Prerequisites

Before you start, ensure you have:

  • Active Google Analytics and Google AdSense accounts.
  • The same email address for both accounts.
  • "Administrator" access to your AdSense account.
  • "Edit" permission on the Google Analytics property.

Steps to Link Accounts

  1. Sign in to AdSense: Visit the AdSense website and sign in.
  2. Access Account Settings: In the left-hand navigation menu, click on "Account."
  3. Navigate to Integrations: Select "Access and authorization," then click on "Google Analytics integration."
  4. Manage Links: You'll be directed to the "Manage your Google Analytics links" page. Here, you can view, create, and delete links between your accounts.
  5. Create a New Link: Click the "+ New Link" button.
  6. Select Property: Choose the Google Analytics property you want to link from the list.
  7. Confirm Link: Click "Create link."

Important Notes

  • Time Delay: It might take up to 24 hours for AdSense data to appear in your Google Analytics account.
  • GA4: These instructions are for connecting AdSense to a Google Analytics 4 property. The steps may differ slightly for Universal Analytics properties.
  • Multiple Accounts: You can link multiple GA4 properties to a single AdSense account, and vice versa.

Benefits of Linking

Linking Google Analytics and AdSense offers several benefits:

  • Understand User Behavior: Analyze how users who click on your ads interact with your website.
  • Optimize Ad Placements: Identify the most effective locations on your site for ad units.
  • Improve Content Strategy: Discover which content generates the most ad revenue and refine your content strategy accordingly.
  • Track Key Metrics: Monitor essential metrics like page RPM (revenue per thousand pageviews), CTR (click-through rate), and ad impressions.

By linking these powerful tools, you gain valuable data that can significantly improve your website's monetization strategy.

Deploy ComfyUI on Google Colab

2025-01-16 00:13:20

Introduction

ComfyUI is a powerful and modular GUI and backend for working with diffusion models, particularly Stable Diffusion. It offers a node-based interface for creating complex image generation workflows without coding. This guide will show you how to set up and run ComfyUI on Google Colab.

Setting Up the Environment

First, we need to set up the environment on Google Colab. You can choose to use Google Drive for storage or just rely on Colab's temporary storage.

Optional: Mount Google Drive

If you want to save your models and outputs persistently, you can mount your Google Drive. This is useful for larger models or if you want to access your data later.

# Set USE_GOOGLE_DRIVE to True
USE_GOOGLE_DRIVE = True

if USE_GOOGLE_DRIVE:
    print("Mounting Google Drive...")
    from google.colab import drive
    drive.mount('/content/drive')
    WORKSPACE = "/content/drive/MyDrive/ComfyUI"
    %cd /content/drive/MyDrive
else:
    WORKSPACE = 'ComfyUI'

Clone ComfyUI Repository

Now, clone the ComfyUI repository from GitHub.

![ ! -d $WORKSPACE ] && echo "-= Initial setup ComfyUI =-" && git clone https://github.com/comfyanonymous/ComfyUI
%cd $WORKSPACE

Update ComfyUI (Optional)

If you want to ensure you have the latest version, pull the latest changes.

# Set UPDATE_COMFY_UI to True
UPDATE_COMFY_UI = True

if UPDATE_COMFY_UI:
  print("-= Updating ComfyUI =-")
  !git pull

Install Dependencies

Install the required Python packages.

print("-= Install dependencies =-")
!pip install -r requirements.txt

Note: Colab may have certain torch versions pre-installed. You can see these versions by running !pip list | grep torch. When installing requirements, you may need to use the --extra-index-url flag to specify a torch version compatible with your system.

For example:

!pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 --extra-index-url https://download.pytorch.org/whl/cu118 --extra-index-url https://download.pytorch.org/whl/cu117
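The note above suggests `!pip list | grep torch` for seeing the pre-installed versions; the same check can be done from a notebook cell without shelling out, using only the standard library (the three package names below are the usual torch trio Colab ships, assumed here; adjust as needed):

```python
from importlib import metadata

def installed_torch_versions():
    """Report which torch-family packages are already installed, and their versions."""
    out = {}
    for pkg in ("torch", "torchvision", "torchaudio"):
        try:
            out[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            out[pkg] = None  # package not installed in this environment
    return out

# print(installed_torch_versions())
```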

Downloading Models and Other Resources

You'll need to download some models, checkpoints, and other resources to get started. Here's how to download a basic set of models, including Stable Diffusion 1.5:

Stable Diffusion 1.5

!wget -c https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors -P ./models/checkpoints/

VAE

!wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/

You can uncomment other commands in the provided Jupyter notebook to download additional models like SDXL, ControlNet, etc.
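After the downloads finish, it can be worth confirming the files landed where ComfyUI expects them before launching. A small standard-library check, with the file names taken from the wget commands above:

```python
import os

# Paths relative to the ComfyUI folder, matching the wget commands above.
EXPECTED_FILES = [
    "models/checkpoints/v1-5-pruned-emaonly-fp16.safetensors",
    "models/vae/vae-ft-mse-840000-ema-pruned.safetensors",
]

def missing_models(root="."):
    """Return the expected model files that are not present under `root`."""
    return [p for p in EXPECTED_FILES if not os.path.isfile(os.path.join(root, p))]

# print(missing_models() or "All models in place.")
```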

Running ComfyUI

There are several ways to run ComfyUI on Colab. We recommend using cloudflared for the best experience.

Using Cloudflared

Cloudflared creates a secure tunnel to your Colab instance, allowing you to access the ComfyUI web interface easily.

  1. Install Cloudflared:

    !wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
    !dpkg -i cloudflared-linux-amd64.deb
    
  2. Run ComfyUI with Cloudflared:

    import subprocess
    import threading
    import time
    import socket

    def iframe_thread(port):
      # Poll until the ComfyUI server accepts connections on the given port.
      while True:
          time.sleep(0.5)
          sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          result = sock.connect_ex(('127.0.0.1', port))
          sock.close()
          if result == 0:
            break
      print("\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")

      # cloudflared prints the public tunnel URL on stderr; extract and show it.
      p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
      for line in p.stderr:
        l = line.decode()
        if "trycloudflare.com " in l:
          print("This is the URL to access ComfyUI:", l[l.find("http"):], end='')
        # print(l, end='')  # uncomment to see the full cloudflared log

    threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()

    !python main.py --dont-print-server
    

    This will output a trycloudflare.com URL that you can use to access the ComfyUI interface in your browser.

Alternative Methods

The Jupyter notebook also provides instructions for running ComfyUI with localtunnel or using Colab's iframe. However, cloudflared is generally the most reliable method.

Conclusion

That's it! You've successfully set up and deployed ComfyUI on Google Colab. You can now start experimenting with different models and creating your own unique image generation workflows. Remember to consult the ComfyUI Examples page for inspiration and guidance on using the various features of ComfyUI.

Reuse AUTOMATIC1111 Stable Diffusion WebUI Models in ComfyUI

2025-01-15 23:28:35

Introduction

ComfyUI is a powerful and modular GUI for working with diffusion models, offering a node-based interface for creating complex workflows. If you've been using AUTOMATIC1111's Stable Diffusion web UI (A1111) and have a collection of models, you might be wondering if you can use them in ComfyUI without downloading them again. The good news is, yes you can!

ComfyUI provides a straightforward way to share models with other diffusion UIs like A1111 through its configuration file.

The extra_model_paths.yaml File

ComfyUI uses a configuration file named extra_model_paths.yaml to define search paths for various model types. This file allows you to specify directories where ComfyUI should look for models, including those already used by A1111.

Steps to Share Models

  1. Locate the File:

    • In the standalone Windows build of ComfyUI, you'll find extra_model_paths.yaml.example in the main ComfyUI directory.
    • For other installations, the file is also located in the root of the ComfyUI repository.
  2. Rename the File: Rename extra_model_paths.yaml.example to extra_model_paths.yaml.

  3. Edit the File: Open extra_model_paths.yaml with a text editor.

  4. Configure Paths: The file contains example configurations for different model types (checkpoints, VAEs, LoRAs, etc.). You need to modify these paths to point to the corresponding directories within your A1111 installation.

    For example, if your A1111 checkpoints are located in stable-diffusion-webui/models/Stable-diffusion, your configuration might look like this:

    a1111:
      base_path: /path/to/your/stable-diffusion-webui # Replace with the actual path
    
      checkpoints: models/Stable-diffusion
      configs: models/Stable-diffusion
      vae: models/VAE
      loras: |
        models/Lora
        models/LyCORIS
      upscale_models: |
         models/ESRGAN
         models/SwinIR
      embeddings: embeddings
      hypernetworks: models/hypernetworks
      controlnet: models/ControlNet
    

    Important:

    • Replace /path/to/your/stable-diffusion-webui with the actual path to your A1111 installation directory.
    • Adjust the sub-paths (e.g., models/Stable-diffusion, models/VAE) according to your A1111 directory structure.
    • The | symbol is used for multi-line strings in YAML, allowing you to specify multiple directories for a single model type.
  5. Save and Restart: Save the changes to extra_model_paths.yaml and restart ComfyUI.
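Before restarting, a quick sanity check can save a round of confusion: verify that each sub-path in your YAML actually exists under base_path. A minimal standard-library sketch (the paths below are hypothetical placeholders mirroring the example configuration above):

```python
import os

# Hypothetical values mirroring the example YAML; adjust to your installation.
BASE_PATH = "/path/to/your/stable-diffusion-webui"
SUB_PATHS = {
    "checkpoints": "models/Stable-diffusion",
    "vae": "models/VAE",
    "loras": ["models/Lora", "models/LyCORIS"],
    "embeddings": "embeddings",
}

def check_paths(base, subs):
    """Return a dict mapping each model type to any configured sub-paths that don't exist."""
    problems = {}
    for kind, paths in subs.items():
        if isinstance(paths, str):  # single path or list of paths, as in the YAML
            paths = [paths]
        bad = [p for p in paths if not os.path.isdir(os.path.join(base, p))]
        if bad:
            problems[kind] = bad
    return problems

# print(check_paths(BASE_PATH, SUB_PATHS) or "All configured paths exist.")
```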

After restarting, ComfyUI will now be able to load models from the directories you specified, including those from your A1111 installation. This means you can seamlessly use your existing model collection in ComfyUI without any redundant downloads or storage usage. Enjoy experimenting with your A1111 models in the flexible environment of ComfyUI!

Installing ComfyUI on Windows for AMD GPUs

2025-01-15 23:22:28

ComfyUI is a powerful and modular GUI that allows you to design and execute advanced stable diffusion pipelines using a graph-based interface. This blog post will guide you through the process of installing ComfyUI on a Windows system with an AMD GPU.

Prerequisites

Before we begin, make sure you have the following:

  • A Windows operating system.
  • An AMD GPU. Note that some older models might require specific driver configurations.
  • 7-Zip for extracting the ComfyUI archive (if you're using the portable version).
  • Python installed (version 3.12 is recommended).

Installation Methods

There are two primary methods for installing ComfyUI on Windows with an AMD GPU:

  1. Using the Portable Standalone Build (Easiest)
  2. Manual Installation (For More Control)

Method 1: Using the Portable Standalone Build

This is the simplest method, especially if you're new to ComfyUI or prefer a quick setup.

  1. Download: Go to the ComfyUI releases page and download the latest portable build for Windows (e.g., ComfyUI_windows_portable_nvidia.7z). Even though the file name mentions Nvidia, it will still work for AMD GPUs if you follow the DirectML instructions below.
  2. Extract: Use 7-Zip to extract the downloaded archive to your desired location.
  3. Install Torch DirectML: Open a command prompt or PowerShell window in the extracted ComfyUI folder and run:

    pip install torch-directml

  4. Place Models: Place your Stable Diffusion checkpoints (the large .ckpt or .safetensors files) in the ComfyUI\models\checkpoints directory.
  5. Run ComfyUI:

    python main.py --directml

That's it! ComfyUI should now be running. You can access the interface through your web browser, typically at http://127.0.0.1:8188.
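A quick way to confirm ComfyUI will actually see your checkpoints is to list the files with the extensions it loads. A small sketch, assuming the default portable-build folder layout described above:

```python
from pathlib import Path

# Default checkpoint location in the portable build (assumed from the steps above).
CHECKPOINT_DIR = Path("ComfyUI") / "models" / "checkpoints"

def list_checkpoints(folder=CHECKPOINT_DIR):
    """Return the names of checkpoint files ComfyUI can load from `folder`."""
    exts = {".ckpt", ".safetensors"}
    return sorted(p.name for p in Path(folder).glob("*") if p.suffix in exts)

# print(list_checkpoints() or "No checkpoints found - check the folder path.")
```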

Method 2: Manual Installation

This method gives you more control over the installation process but requires a few more steps.

  1. Clone the Repository: Open your terminal or command prompt and clone the ComfyUI repository:

    git clone https://github.com/comfyanonymous/ComfyUI.git

  2. Navigate to the Directory:

    cd ComfyUI

  3. Install Dependencies:

    pip install -r requirements.txt

  4. Install Torch DirectML:

    pip install torch-directml

  5. Place Models: Put your Stable Diffusion checkpoints in models/checkpoints and your VAE models in models/vae.
  6. Run ComfyUI:

    python main.py --directml

Note for Specific AMD GPUs:

  • If you encounter issues with your specific AMD card model, you might need to set an override variable before launching. The inline form below is POSIX shell syntax; in a Windows Command Prompt, run set HSA_OVERRIDE_GFX_VERSION=... on its own line first, then launch ComfyUI.
    • For 6700, 6600, and some older RDNA2 cards: HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py --directml
    • For 7600 and some RDNA3 cards: HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py --directml

Sharing Models with Other UIs

If you have another Stable Diffusion UI installed (like Automatic1111) and want to share models with ComfyUI to save disk space, you can modify the model search paths:

  1. Rename: Find the extra_model_paths.yaml.example file in the ComfyUI directory and rename it to extra_model_paths.yaml.
  2. Edit: Open the extra_model_paths.yaml file with a text editor.
  3. Configure Paths: Modify the paths in the file to point to the model directories of your other UI.

Conclusion

You have now successfully installed ComfyUI on your Windows system with an AMD GPU. You can start exploring the powerful features of ComfyUI and create complex Stable Diffusion workflows. For examples and inspiration, visit the ComfyUI Examples page. Remember that the ComfyUI community is active and helpful, so don't hesitate to seek support on the Matrix space or Comfy.org if you encounter any issues. Happy creating!