I'd break your testing into three steps:
1. Plan, and decide what data you want to end up with post-testing.
2. Perform the tests, using data collection methods that capture what you identified in step 1.
3. Decide how you'll interpret the data, and what changes you'll make to the game as a result.
I think it's useful to consider the end result first. Why are you doing the tests? Are you hoping to find a bunch of bugs? Maybe you want to see if your game has that 'fun factor'? Do you want your testers to start from the beginning like an ordinary player would, or would you rather they focus on a specific level or game mechanic? In most cases it's probably all of the above, but there's still a lot of value in knowing what you hope to have at the end of the testing.
These questions all help you develop a testing structure. If you're looking to see how easy the game is for a beginner to pick up, don't tell your testers anything and let them loose. In fact, you probably don't want to tell them much in general unless you have a specific reason to. The fewer preconceptions they have, the better they can judge your game on its own merits. If it's just a preliminary round of bug testing you're after, I might simply tell the players to go nuts and break the game if they can.
All of this could be a poor use of time, though, if you don't have some way to capture what they're doing, whether that's watching over their shoulder and taking notes, recording their screens, or logging the actions they take from within the game itself. Just remember that your small pool of testers isn't necessarily representative of your playerbase at large, and that they're there to help you, not to design your game. Don't feel obligated to implement every change they suggest.
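If you go the in-game logging route, it doesn't have to be elaborate. Here's a minimal sketch of what that could look like; the class name, event fields, and example events are all hypothetical, and you'd adapt them to whatever engine and data you actually care about:

```python
import json
import time


class PlaytestLogger:
    """Collects timestamped gameplay events during a test session,
    then dumps them to JSON for later analysis."""

    def __init__(self, session_id):
        self.session_id = session_id
        self.events = []

    def log(self, event_type, **details):
        # Record one event with a wall-clock timestamp and free-form details.
        self.events.append({
            "t": time.time(),
            "type": event_type,
            "details": details,
        })

    def dump(self, path):
        # Write the whole session out once the tester is done.
        with open(path, "w") as f:
            json.dump(
                {"session": self.session_id, "events": self.events},
                f,
                indent=2,
            )


# Example: hooks in your game code would call log() at interesting moments.
logger = PlaytestLogger("tester-01")
logger.log("level_start", level="tutorial")
logger.log("player_death", level="tutorial", cause="fall")
logger.log("level_complete", level="tutorial", duration_s=312.4)
```

Even something this simple answers questions like "where do players die most?" or "how long does the tutorial take?" without you hovering over anyone's shoulder.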