
Posts Tagged ‘database’

Xtream Codes IPTV Panel 2.4.2 Review – Part 4: Tutorial to Change the Main Server, Backup & Restore the Database

March 20th, 2017

This is the fourth part of a review of Xtream Codes IPTV Panel, software that lets you become your own content provider and manage streams, clients, and resellers. The first three parts:

  1. Review of Xtream-Codes IPTV Panel Professional Edition – Part 1: Introduction, Initial Setup, Adding Streams…
  2. Xtream Codes IPTV Panel Review – Part 2: Movie Data Editing, Security, Resellers, Users and Pricing Management
  3. Xtream-Codes IPTV Panel Review – Part 3: Updates and New Features for Version 2.4.2

Main Server Change – Part 1: New Server

Changing your Main Server can cause trouble if you do not know what you are doing. There are many reasons to change the Main Server: a crash, a brand-new machine, or promoting a Load Balancer to Main Server.

Remember, it all revolves around your existing backup: you will restore it after the Main Server change succeeds. That is not difficult, and anybody can do it. But the backup contains your former configuration, servers included, so it needs some corrections after the restore finishes; for example, all your clients and servers (including your old Main Server) will reappear once the backup is installed.

I’ll try to explain what that’s all about and show different examples.

  1. Changing the Main Server sounds and looks pretty easy, piece of cake, right?

But let me tell you, it’s not! Be absolutely sure about what you are doing beforehand. We start with “Manage Servers” on the left side of the Panel.

Then scroll down to the bottom, where you will find the Main Server Change button.


Warning: Make sure you have taken all the necessary steps BEFORE you press Main Server Change; everything will be gone in a flash.

Step 1: Do a manual backup while this is still possible, since Xtream-Codes thought it was a good idea to remove the manual restore button, so you can no longer re-install your full backup yourself. Not a good idea, Xtream-Codes!

Step 2: Do a fresh automated backup on Xtream-Servers.

  • Database Manager → Remote Backup Now

Step 3: Take your time to think about your scenario. What do you want to do? Change to a brand-new Main? Reuse the old one after formatting it? Promote an existing Load Balancer to Main? You see, each case requires different preparation at this stage.

Think about the existing backup, which holds all your clients and streams. You do not want to lose them, do you? Keep in mind that the backup (and its later restore) contains all your streams, clients and so on, but also the old server configuration.

Also keep your Panel ROOT access at hand: after pressing the Main Server Change button, it has happened a few times that I could not log in to the CMS anymore without the root user/password. You can find these root credentials in your Xtream-Codes WHMCS client area: click “My Services”, open your product details, and the root username and password are shown there.

We now change the Main to a NEW ONE:

After pressing the Button


A warning pops up, asking “Are you sure?”


We press OK! Seat belts on, and then this should appear. (In former versions I was simply kicked out of the CMS, and only the root user/password could log in again, not admins. This seems to have been fixed with a friendlier follow-up now.)


Now enter your new Main’s IP and SSH root password, plus your MySQL root password if you have one; if you are not sure, in most cases you can simply enter your SSH root password again. Also check the SSH port: the standard is port 22, unless you changed it before. Don’t worry about the rest for now; you can easily change it later. If all the data was correct, your new Main Server will now install, and after a couple of minutes it is done.


After the installation of your new Main is finished, you want to restore the backup. As I said, Xtream-Codes right now only gives you the choice of installing a remote backup from the Xtream Codes servers (I am sure they will bring the manual option back). To do this, go to the Database Manager, where you will see all your remote backups.


Right-click the curved red and green arrow (Load Backup), choose your remote backup, and this shows up:


Press OK, and you will see that the database with all your values is restored.

Next step, as mentioned above: after your backup is restored, you will see that your former old Main Server appears to be installed again! You have to go to Edit Server → Edit Main and CORRECT THE IP AND SSH. That will do it.

After successfully installing the new Main (if it is a brand-new machine) and then restoring the backup, your Main will show the former Main’s IP, because it was still stored in the backup. No problem at all: simply edit the Main Server and enter the correct IP, SSH root password and SSH port (commonly port 22, unless you changed it on your new Main).


It also appears that your Load Balancers need a FULL REMAKE

Press “Full Remake All LBs”. After pressing it, here we go:

You may have to repeat the full remake of all Load Balancers, but rest assured, it will work.

This was Part 1 of changing the Main Server. You can guess what follows: the slightly more complicated procedure of changing the Main under different scenarios:

  1. Converting an existing Load Balancer into the new Main
  2. The preparation required to do this

Changing Main Server – Part 2: Convert Load Balancer to Main Server

Scenario: we want to turn a running Load Balancer into the new Main Server and send the currently running Main into retirement. This example assumes you have a few Load Balancers up and running; if you only have two servers in total, it is not complicated at all. As I said above, each scenario is different: there are admins with 20 or more Load Balancers, and small setups with just two or three servers in total.

Do exactly Step by Step as follows, no twisting, no turning, no upside down please.. 🙂

Step 1: Go to Tools on the left side and temporarily transfer all streams from the future Main Server to another Load Balancer of your choice. That is how I did it; I am not sure it would work if you went another way and left the streams on the future Main (which is still a LB right now), because we have to delete this server from the Panel configuration later to make it the Main. Reason: after we push the remote backup back in, it would reappear as a Load Balancer, and we cannot make it a Main Server without first deleting it from the Panel.

Follow this order exactly from the start: NO remote backup before this point! (I still hope Xtream-Codes will bring the old restore feature back soon, because not everyone can handle phpMyAdmin to import a database backup on their own. At the very least there should be a hint or warning like “Remote backup now?”, in case the admin forgets. Remember also that the “Remote Backup” cron job only runs once every 24 hours, so it is better to trigger the remote backup manually first.)


In general, the remote backups are a good idea for safety reasons; it is much safer than before. Here we go:


After you’ve transferred your Main Server’s streams to another server, first check that the Main is really empty; this can take one, two or more minutes, depending on the number of transferred streams. Now we have an empty Main, the one we will no longer need afterwards.

We also have to empty the Load Balancer we want to promote to the new Main: no streams may remain on it. So we transfer its streams to another Load Balancer, the same way we did before with the Main that has to go.

Step 2:

After all streams are transferred, we do a remote backup and, to be on the safe side, a manual backup to your computer as well. First press BACKUP DATABASE (the backup file downloads to your computer), then immediately press REMOTE BACKUP NOW (the backup is uploaded to the Xtream Codes server).


Step 3:

We go to Manage Servers, and at the bottom we see Main Server Change again:


Let’s summarize what we have done.

  1. The current Main and the future Main both have no streams on them.
  2. We are still in the process of changing our Main Server.
  3. The backups have already been made (with an empty Main Server and an empty Load Balancer).
  4. We have all the necessary data: the IP and SSH root password of the Load Balancer we want to promote to the new Main, plus the root login for our CMS, just in case.

After that, we go to Manage Servers → Edit Server → Delete Server.

We delete the Load Balancer we want as the new Main. Now we can press the Main Server Change button.


All the necessary follow-up steps are already described in Part 1 above.

We are ready to go!

  • Just in case you forgot, or you simply prefer it, you can also load your saved database backup manually later with phpMyAdmin.
  • If you cannot manage it, open a ticket; Xtream-Codes will be happy to help you out.
  • Don’t forget, Xtream-Codes support is not open 24/7!
  • One of the most important final steps: after the new Main is successfully installed, don’t forget to switch the old Main OFF!
  • Switch it off, or give it a fresh install; and of course you will have to sort out your streams later, balancing them again.
  • In other cases, for example when you only have two servers, the procedure is similar and less complicated because of the limited number of streams and clients. This example was written for setups with a few more Load Balancers running.
  • Don’t hesitate to open a ticket if you run into trouble; Xtream Codes will help in any case.
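If you do restore the saved database backup by hand, a plain MySQL import is essentially what phpMyAdmin does for you. A rough sketch, assuming root MySQL access; the database name `xtream_iptvpro` and the dump file name are examples, so check your own panel installation first:

```shell
# Import a previously downloaded panel backup into MySQL by hand.
# Database name and dump file name are examples - adjust to your setup.
mysql -u root -p xtream_iptvpro < panel_backup.sql
```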

I hope this little How-To helps you guys a little.

Ray

How to Write ESP8266 Firmware from Scratch (using ESP Bare Metal SDK and C Language)

October 7th, 2016

CNXSoft: This is a guest post by Alexander Alashkin, software engineer in Cesanta, working on Mongoose Embedded Web Server.

Espressif’s ESP8266 has had quite an evolution; some may even call it controversial. It all started with the ESP8266 being a WiFi module with a basic UART interface. But later it became clear that it’s powerful enough for embedded systems: it’s essentially a module that can run full-fledged applications.


Espressif realized this as well and released an SDK. The first versions were full of bugs, but it has since become significantly better. Another SDK was released offering FreeRTOS ported to the ESP. Here, I want to talk about the non-OS version. Of course, there are third-party firmwares that support scripting languages to simplify development (just Google for these), but the ESP8266 is still a microchip (emphasis on MICRO), and a scripting language might be overkill. So what we are going to come back to is the ESP SDK and bare C. You’ll be surprised: it’s easier than it looks!

First steps

To develop firmware you’ll need:

  1. An ESP8266 connected to your computer via USB.
    You will need several Dupont cables and a UART-to-USB adapter; if you have an Arduino board, you can use it as the UART-to-USB adapter. Google “connect esp8266 to computer” – there are a lot of articles about this.
  2. The SDK. I suggest using this one: https://github.com/pfalcon/esp-open-sdk

Download it and follow its readme to build it. There is nothing extraordinary in this process: all you need to do is install the prerequisites and invoke “make”.

In general, this SDK is intended for *nix systems, but there is a port for Windows as well.

In short, to start development you should have an ESP device available as /dev/ttyUSB0 (/dev/ttyACM0 if you use an Arduino, or COMn on Windows) and the SDK installed in a known path.

main()

In C, int main() is the entry point of a program, but in the case of the ESP the entry point is void user_init(). This function must be used only for initialization, not for long-running logic.

Here is an example:
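The original code listing is not reproduced here, so below is a minimal sketch of what it looks like, assuming the non-OS SDK’s `osapi.h` and `user_interface.h` headers:

```c
#include "osapi.h"
#include "user_interface.h"

// Called by the SDK once all system modules are initialized.
static void init_done_cb(void) {
  os_printf("System init done, safe to use WiFi etc.\n");
}

void user_init(void) {
  // Only register the callback here; defer real work until
  // the system tells us initialization is complete.
  system_init_done_cb(init_done_cb);
}
```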

Note that all we do in user_init is call the system_init_done_cb API function. It takes one parameter: a pointer to a function that will be called once all system modules have been properly initialized. You could put your initialization code directly in user_init, but you may hit problems with some system functions (like WiFi) simply because the corresponding modules are not initialized yet. It is therefore better to use system_init_done_cb and perform initialization in the callback function.

Beware of the dog

The ESP8266 has watchdog functionality, and there is NO documented API to control it (there is some undocumented stuff, but that is out of scope for this tutorial). Its timeout is 1 second.

What does that mean? It means you have to return control to the system at least once a second, otherwise the device reboots. Code like the following leads to a reboot:
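A sketch of the kind of code that trips the watchdog (the callback name is hypothetical):

```c
// Any callback that never yields back to the SDK scheduler
// starves the watchdog: after about 1 second the chip resets.
void bad_callback(void) {
  for (;;) {
    // busy-wait forever -> the device reboots
  }
}
```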

In general, the watchdog is not evil; it helps if the program hangs. And 1 second is not as short as it sounds. Just keep this fact in mind.

Doing something

Taking what we learned about the watchdog into account, we face an obvious question: where can I run my tasks?

The simplest answer is: in timers. The timer API in the ESP SDK is very simple.
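A sketch of the timer setup together with the LED-blink callback discussed below, assuming the SDK’s `os_timer_*` API; the `gpio16_*` helpers come from the GPIO16 driver code shipped with SDK examples, so treat them as an assumption for your particular board:

```c
#include "osapi.h"
#include "user_interface.h"

static os_timer_t blink_timer;
static int led_on = 0;  // state variable toggled on each call

// Timer callback: runs every 5 seconds and toggles the LED on GPIO16.
static void start_timer_cb(void *arg) {
  (void) arg;
  led_on = !led_on;
  gpio16_output_set(led_on);  // high = LED on, low = LED off
}

static void init_done_cb(void) {
  gpio16_output_conf();                 // configure GPIO16 as output
  os_timer_disarm(&blink_timer);
  os_timer_setfn(&blink_timer, start_timer_cb, NULL);
  os_timer_arm(&blink_timer, 5000, 1);  // 5000 ms, repeating
}

void user_init(void) {
  system_init_done_cb(init_done_cb);
}
```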

If the last parameter of the os_timer_arm function is 0, the timer callback is invoked only once. If it is 1, it is called repeatedly until os_timer_disarm is called.

And, finally, we have a place to put our code: the start_timer_cb function.

Our task here is to make an LED blink. Some ESP boards have an LED attached to GPIO16; if your board doesn’t have one, you can attach an LED to any free GPIO.

As you remember, start_timer_cb is a timer callback function, and it is called every 5 seconds. On the first call the state variable is 0 and we set GPIO16 high, so the LED turns on. On the next call we set GPIO16 low, and the LED turns off. And so on.

Building the project

Now it is time to build our project. Let’s say we have only one source file: main.c. I can’t recommend the makefiles used to build the SDK examples; they are too complicated and a bit weird. So I’d suggest writing your own (simple!) makefile.

Here are steps:

  1. Compile main.c to main.o.
    Use xtensa-lx106-elf-gcc compiler which is a part of esp-open-sdk.
  2. Link project.
    Linker to use – the same xtensa-lx106-elf-gcc. Libraries to link with are: c gcc hal m pp phy net80211 wpa main

Also, you need to supply the linker script (.ld file); choose the one from esp-open-sdk that matches the flash size of your device. After this step you’ll have an .elf file.
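The two steps above might look like this in a shell script or makefile. The paths and the exact .ld file are assumptions; adjust them to your esp-open-sdk install and flash size:

```shell
SDK=/opt/esp-open-sdk/sdk
CC=xtensa-lx106-elf-gcc

# 1. Compile main.c to main.o
$CC -I$SDK/include -mlongcalls -Os -c main.c -o main.o

# 2. Link with the SDK libraries and a linker script matching the flash size
$CC -T$SDK/ld/eagle.app.v6.ld -nostdlib -o main.elf \
    -Wl,--start-group main.o -lc -lgcc -lhal -lm -lpp -lphy \
    -lnet80211 -lwpa -lmain -Wl,--end-group
```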

  3. Convert the .elf file to .bin files.

For this, use esptool.py script from esp-open-sdk. Run it like this:
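Roughly like this; the output directory is an assumption, and option names may differ slightly between esptool versions:

```shell
# Split main.elf into flashable .bin images in the build/ directory
esptool.py elf2image --output build/ main.elf
```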

If everything is OK, you should have 3 files in <output dir> with names like 0x00000.bin, 0x11000.bin and 0x66000.bin.

Flashing

The final step is to put our firmware onto the device. For this we use esptool again, but now with the write_flash option. Like this:
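Something along these lines; the serial port and the offsets are examples, so use the offsets matching the .bin files produced in the previous step:

```shell
esptool.py --port /dev/ttyUSB0 write_flash \
    0x00000 build/0x00000.bin \
    0x11000 build/0x11000.bin \
    0x66000 build/0x66000.bin
```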

Use the real filenames from the previous step. And, if everything is still OK, the LED attached to the device will start to blink every 5 seconds.

Next steps

Writing firmware for any device is a huge topic, and working with the ESP8266 is no exception, so the purpose of this article is only to point you in the right direction. There are a lot of different APIs in the ESP8266 SDK: WiFi, GPIO, TCP/UDP and more. Make sure to check out the documentation, as well as the examples by firmware providers and esp-open-sdk. If you want to start with an example, check out this one, which goes through running Mongoose Embedded Web Server on the ESP8266.

OpenCL Accelerated SQL Database with ARM Mali GPU Compute Capabilities

March 20th, 2014

We’ve previously seen that GPU compute on ARM can improve performance for mobile, automotive and consumer electronics applications. GPU compute offloads CPU tasks that can be parallelized to the GPU, using APIs such as OpenCL or RenderScript. Most applications that can leverage GPU compute are related to media processing (video decoding, picture processing, audio decoding, image recognition, etc…), but one thing I did not suspect could be improved is database access. That’s what Tom Gall of Linaro has achieved in a side project, using OpenCL to accelerate SQLite database operations by around 4 times in a given benchmark.


SQLite Architecture and “Attack Point” for OpenCL Implementation

The hardware used was a Samsung Chromebook with an Exynos 5250 SoC featuring a dual-core Cortex A15 processor and an ARM Mali-T604 GPU. GPU compute is only possible on ARM Mali-T6xx and greater, and won’t work on Mali-400/450 GPUs. Other GPU vendors such as Vivante and Imagination Technologies also support GPU compute in their latest processors.

As a first implementation, he added an API to SQLite, but eventually the code may be merged into SQLite itself, as that would also accelerate existing applications using SQLite. This type of acceleration works best with large tables and parallel tasks. For benchmarking purposes, Tom used a 100,000-row database with 7 columns and ran the same query (select * from testdb) using both the SQLite C API and his OpenCL accelerated API. Here are the results:

  • SQLite C API – 420.274 milliseconds
  • OpenCL accelerated SQLite API – 110.289 milliseconds

The first test ran fully on the Cortex A15 cores @ 1.7 GHz, whereas the OpenCL test ran mostly on the Mali-T604 GPU clocked at 533 MHz (TBC). The time includes both running the OpenCL kernel and transferring the data from the result buffer.

More work is needed, but this seems like an interesting application of GPU compute for some use cases; I would expect no gain for queries on small tables, for example. The modified OpenCL code does not appear to be available right now, but you may want to read the GPGPU on ARM presentation from Linaro Connect Asia 2014 for a few more details about the implementation, and if you want to play around with OpenCL 1.1 (or OpenGL ES) in Linux on a Chromebook, you can follow those instructions.

Delete Old Revisions to Reduce Time to First Byte for WordPress Blogs

October 19th, 2011

I’ve already implemented several steps to improve this blog performance:

Those two work pretty well, but there was still a problem with the Time to First Byte according to http://www.webpagetest.org.

It got an F mark for First Byte Time. Sometimes I would get a TTFB (Time To First Byte) of 20 seconds or more. A high TTFB is a sign of slow back-end processing, either because of poorly optimized software or insufficient hardware, or both. Part of the problem is probably my hosting provider (I use shared hosting): I sometimes see a very high server load in cPanel (e.g. 50, with 4 CPUs), whether my blog is running or not.

But I found a blog post explaining how to reduce the TTFB of a WordPress blog by installing the Better Delete Revision plugin, which reduces the size of the WordPress database. So I decided to give it a try.

Here’s what my database looked like before:

The largest table was wp_posts, with 8.7 MB and 2544 rows.

So I backed up the WordPress database first, went to Settings → Better Delete Revision in the Dashboard, and clicked on “Check Revision Posts”. It showed 1842 redundant posts (old revisions of my current blog posts). I clicked on “Yes, I would like to Delete them!” and ran a database optimization.
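Under the hood, the core of what the plugin does boils down to a couple of SQL statements against the WordPress database; something like the following from the shell. The credentials and database name are placeholders, and the plugin also cleans up metadata linked to the deleted revisions, so back up first:

```shell
# Delete stored post revisions and reclaim the space.
# Replace wp_user / wordpress_db with your own credentials and DB name.
mysql -u wp_user -p wordpress_db <<'SQL'
DELETE FROM wp_posts WHERE post_type = 'revision';
OPTIMIZE TABLE wp_posts;
SQL
```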

Here’s what my WordPress database looked like after that:

wp_posts is now 1.6 MB (vs 8.7 MB) and has 702 rows.

Then I went back to http://www.webpagetest.org to test again. I still get the F mark, but the results do seem somewhat better. It’s quite difficult to judge: since I’m on a shared host there are too many variables I cannot control, and the results provided by WebPageTest are not consistent, as they depend on the server load.

Nevertheless, I don’t think deleting old revisions can be a bad thing, especially if you have been running your blog for a long time and post regularly.

Databases for Linux Embedded Systems: Berkeley DB and SQLite

February 28th, 2011

Embedded systems often need a database to store contact information, EPG data and more. Many Linux systems use MySQL, but such a large database management system may not always be appropriate for embedded systems.

Hence, there are lightweight database management system implementations especially suited to embedded systems thanks to their binary footprint, memory footprint and CPU requirements.

If you want to develop in C on Linux, and your requirement is to have little or no licensing fees for your application, you could consider Oracle Berkeley DB or SQLite, among others.

Oracle Berkeley DB (previously Sleepycat Berkeley DB)  is described as follows:

Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects. Berkeley DB provides a collection of well-proven building-block technologies that can be configured to address any application need from the hand-held device to the datacenter, from a local storage solution to a world-wide distributed one, from kilobytes to petabytes.

Berkeley DB has the following characteristics:

  • Written in C
  • Software Library
  • Key/value API
  • SQL API by incorporating SQLite
  • BTREE, HASH, QUEUE, RECNO storage
  • C++, Java/JNI, C#, Python, Perl, …
  • Java Direct Persistence Layer (DPL) API
  • Java Collections API
  • Replication for High Availability

The latest stable version is  Berkeley DB 11gR2 (11.2.5.1.25).

SQLite is described as follows:

SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.

SQLite has the following features:

  • Transactions are atomic, consistent, isolated, and durable (ACID) even after system crashes and power failures.
  • Zero-configuration – no setup or administration needed.
  • Implements most of SQL92 (some features are not supported).
  • A complete database is stored in a single cross-platform disk file.
  • Supports terabyte-sized databases and gigabyte-sized strings and blobs.
  • Small code footprint: less than 325KiB fully configured or less than 190KiB with optional features omitted.
  • Faster than popular client/server database engines for most common operations.
  • Simple, easy to use API.
  • Written in ANSI-C. TCL bindings included. Bindings for dozens of other languages available separately.
  • Well-commented source code with 100% branch test coverage.
  • Available as a single ANSI-C source-code file that you can easily drop into another project.
  • Self-contained: no external dependencies.
  • Cross-platform: Unix (Linux and Mac OS X), OS/2, and Windows (Win32 and WinCE) are supported out of the box. Easy to port to other systems.
  • Sources are in the public domain. Use for any purpose.
  • Comes with a standalone command-line interface (CLI) client that can be used to administer SQLite databases.

The latest version is SQLite 3.7.5.

Whichever one you choose, you’ll have to consider the following:

  1. SQLite supports SQL natively, has a low memory footprint (190 KB minimal / 325 KB full-featured) and its source is in the public domain. However, some extensions are not open source and require a license, such as the SQLite Encryption Extension (2000 USD one-time fee payable to Hwaci).
  2. Berkeley DB supports SQL through SQLite, has a low memory footprint (350 KB minimal configuration) and has a dual open-source/commercial license. Berkeley DB also has an XML edition (C++) and a Java edition.

There are many parameters to consider, but in most cases you would probably go with SQLite. If you need encryption, do not want to pay the 2000 USD license fee for the SQLite Encryption Extension, and your code can be open-sourced, then Berkeley DB is probably the better choice.

In the next posts, I’ll explain how to cross-compile SQLite and Berkeley DB for ARM and MIPS targets.