Archive for the ‘Administration’ Category

MySQL :: White Papers – MySQL Cluster

  • Scaling Web Services with MySQL Cluster: An Alternative to the MySQL Memory Storage Engine
    While effective for smaller, “read-mostly” deployments, users of the MySQL MEMORY storage engine can find themselves confronting scalability, concurrency, and availability issues as their web services evolve.
    MEMORY is a basic MySQL storage engine designed for in-memory operations. MySQL Cluster, which can itself be implemented as a MySQL storage engine, can perform all the same in-memory operations, and it is faster, more reliable, and uses less RAM for data, even on a single node.
    In performance testing, MySQL Cluster delivered 30x higher throughput with 1/3rd the latency of the MEMORY storage engine on just a single node.
    MySQL Cluster can be configured and run in the same way as MEMORY (i.e. on a single host with no replication and no persistence), and any of its high availability and scalability attributes can then be added in any combination as the workload evolves. The database can be scaled across multiple nodes without implementing sharding (partitioning) in the application, significantly reducing both cost and complexity.
    This whitepaper presents comparisons between the MEMORY storage engine and MySQL Cluster, including a performance study of the two technologies, before providing a step-by-step guide on how existing MEMORY storage engine users can upgrade to MySQL Cluster. A minimal sketch of the engine swap appears after this list.
    Read more »
  • Using MySQL Cluster 7.1 for Web and eCommerce Applications
    As your online services expand, so too do the demands on your web infrastructure. Challenges include:
    • Growing revenue streams and customer loyalty from your web and eCommerce applications
    • The need for continuous application availability and real time responsiveness to ensure a quality customer experience
    • Constant pace of innovation to quickly deliver compelling new services to market

    MySQL Cluster is a proven key component of web infrastructure that can help you cost-effectively deploy online applications to generate new revenue streams and build vibrant user communities.
    Read the white paper to learn how deploying MySQL Cluster with your web and eCommerce services enables you to grow revenue and enhance customer loyalty.

    Read more »

  • Building an Open Source, Carrier Grade Platform for Data Management with MySQL Cluster
    Whether Service Providers are looking to deploy new Web/Telco 2.0 applications to mobile internet users or to consolidate subscriber data within the network to support greater service personalization and targeted advertising, the database plays a key enabling role.
    With the rapid shift away from closed, expensive, proprietary technology, MySQL has grown to become the world’s most popular open source database. In this paper we explore how an open source, carrier grade platform architecture can cost-effectively meet the communication industry’s high availability, scalability, and real-time performance requirements.
    Read more »
  • MySQL Cluster 7.1: Architecture and New Features Whitepaper
    MySQL Cluster has been widely adopted for a range of telecommunications, web and enterprise workloads demanding carrier-grade availability with high transaction throughput and low latency. In this paper we will explore new features introduced in MySQL Cluster 7.1 to meet an ever expanding and more demanding set of mission-critical data management requirements.
    Read more »
  • MySQL Cluster Evaluation Guide – Designing, Evaluating and Benchmarking MySQL Cluster
    In this whitepaper, learn the fundamentals of how to design and select the proper components for a successful MySQL Cluster evaluation. We explore hardware, networking and software requirements, and work through basic functional testing and evaluation best practices.
    Read more »
  • Guide to Optimizing Performance of the MySQL Cluster Database
    This guide explores how to tune and optimize the MySQL Cluster database to handle diverse workload requirements. It discusses data access patterns and how to build distribution awareness into applications, before exploring schema and query optimization, parameter tuning, and how to get the best out of the latest innovations in hardware design.
    The guide concludes with recent performance benchmarks conducted with the MySQL Cluster database and an overview of how MySQL Cluster can be integrated with other MySQL storage engines, before summarizing additional resources that will enable you to optimize MySQL Cluster performance with your applications.
    Read more »
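
To make the engine swap from the first whitepaper concrete, here is a minimal, hypothetical sketch. The database and table names are invented for illustration, and it assumes a running NDB cluster that the MySQL server is already connected to:

  # Hypothetical example: a session table created on the MEMORY engine.
  mysql -e "CREATE TABLE sessions (id INT PRIMARY KEY, data VARCHAR(255)) ENGINE=MEMORY;" mydb

  # Moving it to MySQL Cluster is a single engine change; the table can
  # then be made persistent and span multiple data nodes.
  mysql -e "ALTER TABLE sessions ENGINE=NDBCLUSTER;" mydb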

Opscode’s Chef Development Environment

DEVELOPMENT:

If you plan to contribute your changes, you need to read the Opscode Contributing document before working on the code.

You will also need to set up the repository with the appropriate branches. We document the process on the Chef Wiki.

Once your repository is set up, you can start working on the code. We do use BDD/TDD with RSpec and Cucumber, so you’ll need to get a development environment running.

ENVIRONMENT:

In order to have a development environment where changes to the Chef code can be tested, we’ll need to install a few things after setting up the Git repository.

Requirements:

Install these via your platform’s preferred method, for example apt, yum, ports, or emerge. A Debian/Ubuntu example follows the list.

  • Git
  • CouchDB
  • libxml2 development package (for webrat)
  • libxslt development package (for webrat)
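
On a Debian/Ubuntu system, for example, something like the following should pull these in (package names are the Debian ones and may differ on your platform):

  # git-core was Debian’s package name for Git at the time; libxml2-dev
  # and libxslt1-dev provide the development headers webrat needs.
  sudo apt-get install git-core couchdb libxml2-dev libxslt1-dev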

Install the following RubyGems.

  • ohai
  • rake
  • rspec
  • cucumber
  • webrat
  • merb-core
  • roman-merb_cucumber
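
With the exception of roman-merb_cucumber, which is covered below, these can be installed in one pass (whether you need sudo depends on your Ruby setup):

  sudo gem install ohai rake rspec cucumber webrat merb-core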

Ohai is also written by Opscode and is available on GitHub at github.com/opscode/ohai/tree/master.

roman-merb_cucumber is available from GitHub:

  gem install --source http://gems.github.com/ roman-merb_cucumber

Starting the Environment:

Once everything is installed, run the dev:features rake task. Since the features do integration testing, root access is required.

  sudo rake dev:features

The dev:features task:

  • Installs the chef, chef-server, and chef-server-slice gems. It will fail if any of the required gems above are missing.
  • Starts chef-server on ports 4000 and 4001.
  • Starts chef-indexer.
  • Starts CouchDB on port 5984.
  • Starts the stompserver on port 61613.

You’ll know it’s running when you see:

   ~ Activating slice 'ChefServerSlice' ...
  merb : worker (port 4000) ~ Starting Mongrel at port 4000
  merb : worker (port 4000) ~ Successfully bound to port 4000
  merb : worker (port 4001) ~ Starting Mongrel at port 4001
  merb : worker (port 4001) ~ Successfully bound to port 4001

You’ll want to leave this terminal running the dev environment.

Web Interface:

With the dev environment running, you can now access the web interface at http://localhost:4000/. Supply an OpenID to log in.
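
To confirm the server is answering before you open a browser, a quick check (assuming curl is installed):

  curl -I http://localhost:4000/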

Spec testing:

We use RSpec for unit/spec tests.

  rake spec

This doesn’t actually use the development environment, because the specs test the Chef internals directly. For integration/usage testing, we use Cucumber features.

Integration testing:

We test integration with Cucumber. The available feature tests are rake tasks:

  rake features                            # Run Features with Cucumber
  rake features:api                        # Run Features with Cucumber
  rake features:api:nodes                  # Run Features with Cucumber
  rake features:api:nodes:create           # Run Features with Cucumber
  rake features:api:nodes:delete           # Run Features with Cucumber
  rake features:api:nodes:list             # Run Features with Cucumber
  rake features:api:nodes:show             # Run Features with Cucumber
  rake features:api:nodes:update           # Run Features with Cucumber
  rake features:api:roles                  # Run Features with Cucumber
  rake features:api:roles:create           # Run Features with Cucumber
  rake features:api:roles:delete           # Run Features with Cucumber
  rake features:api:roles:list             # Run Features with Cucumber
  rake features:api:roles:show             # Run Features with Cucumber
  rake features:api:roles:update           # Run Features with Cucumber
  rake features:client                     # Run Features with Cucumber
  rake features:language                   # Run Features with Cucumber
  rake features:language:recipe_include    # Run Features with Cucumber
  rake features:provider:package:macports  # Run Features with Cucumber
  rake features:provider:remote_file       # Run Features with Cucumber
  rake features:search                     # Run Features with Cucumber
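
So, for example, to exercise only node creation through the API (again as root, since the features do integration testing):

  sudo rake features:api:nodes:create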

Git Reference

Introduction to the Git Reference

This is the Git reference site. It is meant to be a quick reference for learning and remembering the most important and commonly used Git commands. The commands are organized into sections by the type of operation you may be trying to do, and each section presents the common options and commands needed to accomplish these common tasks.

Each section links to the next, so the site can be used as a tutorial. Every page also links to more in-depth Git documentation such as the official manual pages and relevant sections of the Pro Git book, so you can learn more about any of the commands. First, we’ll start by thinking about source code management the way Git does.

How to Think Like Git

The first thing that is important to understand about Git is that it thinks about version control very differently from Subversion or Perforce or whatever SCM you may be used to. It is often easier to learn Git if you try to forget your assumptions about how version control works and think about it the Git way instead.

Let’s start from scratch. Assume you are designing a new source code management system. How did you do basic version control before you had a tool for it? Chances are that you simply copied your project directory to save what it looked like at that point.

 $ cp -R project project.bak

That way, you can easily revert files that get messed up later, or see what you have changed by comparing what the project looks like now to what it looked like when you copied it.
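
For example, comparing the backup against the working copy shows everything you have changed since the snapshot:

 $ diff -ru project.bak project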

If you are really paranoid, you may do this often, maybe putting the date in the name of the backup:

 $ cp -R project project.2010-06-01.bak

In that case, you may end up with a bunch of snapshots of your project that you can compare and inspect. You can even use this model to share changes with someone fairly effectively. If you zip up your project at a known state and put it on your website, other developers can download it, change it, and send you a patch pretty easily.

 $ wget http://sample.com/project.2010-06-01.zip
 $ unzip project.2010-06-01.zip
 $ cp -R project.2010-06-01 project-my-copy
 $ cd project-my-copy
 $ (change something)
 $ cd ..
 $ diff -ru project.2010-06-01 project-my-copy > change.patch
 $ (email change.patch)

Now the original developer can apply that patch to their copy of the project, and they have your changes. This is how collaboration on many open source projects worked for years.
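
On the original developer’s side, applying the emailed patch might look something like this (assuming it was produced with the diff command above):

 $ cd project.2010-06-01
 $ patch -p1 < ../change.patch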

This actually works fairly well, so let’s say we want to write a tool to make this basic process faster and easier. Instead of writing a tool that versions each file individually, like Subversion, we would probably write one that makes it easier to store snapshots of our project without having to copy the whole directory each time.

This is essentially what Git is. You tell Git you want to save a snapshot of your project with the git commit command and it basically records a manifest of what all of the files in your project look like at that point. Then most of the commands work with those manifests to see how they differ or pull content out of them, etc.
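
In practice, replacing the cp -R habit with Git snapshots looks like this:

 $ cd project
 $ git init
 $ git add .
 $ git commit -m "snapshot of the project as it looks right now"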

If you think about Git as a tool for storing and comparing and merging snapshots of your project, it may be easier to understand what is going on and how to do things properly.