Showing posts with label Common Core. Show all posts

Monday, February 23, 2015

Sentopiary

Sentopiary ($4.99) is a new and very interesting app from one of the creators of Popplet (a favorite of mine) that has nice potential for use in targeting complex syntax. With Sentopiary, you can explore sentence building with students within two modes:
-The "create a sentence" mode lets you build leveled sentences (e.g., level 1 uses articles, nouns, pronouns, and present-tense verbs; as you increase the difficulty, tenses, adjectives, prepositional phrases, and adverbs come into play)
-Similarly, the leveled "challenge" mode asks you to recreate a sentence using the above grammatical categories.



Check out this video to see how it works.



SENTOPIARY - Create - A Hungry Bull from eeiioo on Vimeo.

Sentopiary is fun and engaging, and it aligns with research demonstrating the importance of phrase elaboration for the development of literate language. Kamhi (2014) states, "There are three basic ways to make sentences more complex: (a) noun phrase elaboration, (b) verb phrase elaboration, and (c) conjoined and embedded clauses." This app includes contexts for (a) and (b), but not (c), so I wrote to the developer and requested that they consider this! They promised to do so in future updates.

Kamhi also writes, "My general principle for targeting complex syntax in therapy is this: Target the meanings and/or functions conveyed by the syntactic structure rather than the structure itself"; that is, make the intervention pragmatically appropriate. This app would seem to contradict that principle with its emphasis on labeling the structures. However, the metalinguistic aspects do align with curriculum goals (e.g., identifying nouns and verbs), and clinicians can easily incorporate strategies "around the app" to emphasize meaning, as I often recommend, such as:
-incorporating sketching or visualization strategies about constructed sentences while increasing bombardment/elicitation of the target structure.
-utilizing the app's potential for absurdity in constructing sentences.


So check out Sentopiary; it looks like a great tool for a wide age range!

Friday, June 13, 2014

World's Worst Pet-Vocabulary

I am often asked about apps that build vocabulary and am at a loss for an easy answer. Are you looking for apps for basic or more advanced vocabulary? Do you mean semantics instead? There are many terrific apps that can be used to build semantic networks, categorization and description (take Bag Game, Naming TherAppy and Describe it to Me as a few examples), and though these skills provide the foundation for expanding vocabulary, this is sometimes not what the person was asking for.

What I often try to do is guide people toward context and repeated exposure, two principles of vocabulary development espoused in the terrific Bringing Words To Life by Isabel Beck (more on this in a minute), and again my answers aren't so straightforward:
-Read books and use Kidspiration or Inspiration to map or categorize vocabulary in context
-Keep a vocabulary journal in Evernote
-etc.

While the above approach aligns with my philosophy about apps, i.e. that they are tools to be used as part of a bigger context, process, sequence of pre-/post- activities, etc., it was really nice to find an app that can be used to develop vocabulary by providing content and context within an engaging package. I'm speaking of World's Worst Pet-Vocabulary (FREE!) from the folks at Curriculum Associates, which was designed specifically to develop Tier 2 vocabulary (another concept discussed in Beck's book): those high-frequency words used by "mature" speakers.

World's Worst Pet contains tons of content that can be worked with across 5 grade levels (grades 4-8 are suggested, but with scaffolding it can be used with younger grades, in the same way kindergarteners learn Tier 2 words using Beck's approach). The concept is that Snargg, the world's worst pet, keeps bolting, and you need to interact with vocabulary to "chase" him through a particular setting (e.g. a bakery) and retrieve him. You move vocabulary words to complete tasks such as matching words to the main idea of "book" titles, finding synonyms or antonyms, responding to questions or categorizing around the vocab words, and identifying examples/nonexamples. Your choices guide Snargg via steps, catapult, rocket, etc. in engaging ways, and the app follows a commonly enjoyed structure of encouraging accurate completion of each level by earning "pupcakes."



My favorite part of the app is that the sets of 10 vocabulary words are contextual, related, relevant to real life, and supported by student-friendly definitions (more Beck). The sets include topics such as words about performing (audience, popular, melody, public), words about going places (journey, roam, guide, proceed), and words about the mall (merchandise, extravagant, desire, vendor).




The levels and sets give you a place to start for students with weak vocabulary and a structure that might serve you (and them) over several years of instruction. You can challenge the students to complete each set perfectly, and the pace of the game will give you a lot of "air time" to discuss each word and provide additional models. Each level can be a doorway to activities over several weeks, including a suggested writing activity, and can be a topic of consultation with teachers so that there are multiple exposures to the words (as building vocabulary only in the "speech room" can have limited efficacy). EVEN BETTER, you can flip this model and use this app as the context for in-class programming, thereby facilitating vocabulary development in the classroom. Beck's book will give you many more ideas for wordplay activities to provide students with multiple exposures to words!

Let me know what you think of this app; it's one of my favorite finds this year. Thank you to Richard Byrne of iPad Apps for School for pointing it out.

Tuesday, January 14, 2014

Timeline

The folks at ReadWriteThink are doing a nice job turning a number of their language-based web tools into apps that are free, to boot. Brought to us by the International Reading Association, these apps, including Trading Cards and RWT Timeline, make great use of some of the features the iPad has to offer. Namely, it's great when an app lets you set up profiles for students or groups for saving work: we often don't complete activities with students in one session, and we can focus more on the language opportunities inherent in creative processes if we can come back and finish the work later. These apps also allow you to add pictures from the camera roll (which, if saved from the internet, provide endless visual contexts) and share in various formats, facilitating consultation and collaboration. Additionally, the RWT apps have a mature look and feel that helps address the dearth of apps geared toward middle and high school students.

What can you do with a timeline, or more specifically the RWT Timeline app? Break down any sequence, a story line, or historical content. It's very simple to use, and again, FREE, so I hope you will check it out!


Common Core Connection:
-CCSS.ELA-Literacy.SL.6.2 Interpret information presented in diverse media and formats (e.g., visually, quantitatively, orally) and explain how it contributes to a topic, text, or issue under study. 
-CCSS.ELA-Literacy.SL.6.4 Present claims and findings, sequencing ideas logically and using pertinent descriptions, facts, and details to accentuate main ideas or themes; use appropriate eye contact, adequate volume, and clear pronunciation.

Thursday, January 9, 2014

Cloudart

Wordle has long been hailed as a great visual way to work with language. Text or a web address (URL) can be pasted into this tool and then a word "cloud" is displayed, with emphasis on more frequently used words. In this way, the tool can be used with any digital text passage or content related website, and the visual that results can be used to develop vocabulary and skills of identifying main idea.

Wordle is not iPad-friendly because it is Java-based, but Cloudart ($.99) is a great translation of this tool to the iPad platform.


Cloudart has a very simple interface, but with many features that make it useful! Simply copy a block of text via Safari or perhaps from an electronic book in iBooks (many samples of novels commonly used in educational settings are free), or a text-based URL (e.g. the Wikipedia article on any academic topic), and the app will generate your cloud. You can then customize it visually, and even tap to remove or change the emphasis of irrelevant/key words. The cloud can be saved as an image to the camera roll or emailed. 

Common Core Connection:
CCSS.ELA-Literacy.RL.5.2 Determine a theme of a story, drama, or poem from details in the text, including how characters in a story or drama respond to challenges or how the speaker in a poem reflects upon a topic; summarize the text.
CCSS.ELA-Literacy.RI.5.2 Determine two or more main ideas of a text and explain how they are supported by key details; summarize the text.
CCSS.ELA-Literacy.RI.5.4 Determine the meaning of general academic and domain-specific words and phrases in a text relevant to a grade 5 topic or subject area.

Friday, January 3, 2014

Write About This

Write About This is an app designed by educators that uses the technique of providing photo prompts as a way to foster writing skills. Visuals such as those in this app give students a place to start with their language, and the prompts that come with the app provide a context to develop writing with the use of strategies such as story grammar mapping or use of expository text structure (list, sequence, description, etc).

What I like particularly about the app is that the authors didn't stop at simply substituting printed photo prompts with an app version, but incorporated a few key features of the iPad to make this a more powerful resource. While students can type onscreen (and text or text/images can be sent via email to continue developing the work in class), they also can "Publish with Audio" and save their visual work with an audio narration as a movie on the Camera Roll. Additionally, pictures taken with the iPad or saved to the Camera Roll can be used in the app to create customized photo prompts. This would give clinicians or teachers the opportunity to work with a group of students to select a great photo and create a prompt for another group of students to use. Presenting the recorded audio between groups will also allow you to work on auditory comprehension.



Write About This is available in a full-featured free version with a limited number of prompts (about 50 as opposed to 375). I was excited to see that the authors are developing another app called Tell About This, which takes the writing component out and focuses on oral language prompts for younger students.

Tuesday, May 14, 2013

Enjoy Warmer Weather with Ice Cream Truck

Apps that go beyond a simple screen and involve people around a play space have great potential for interactivity and "speechieness," or using language in the context of the app.  Ice Cream Truck ($1.99) is one of those apps- it's somewhat in the vein of Toca Store but it has additional contexts. With Ice Cream Truck, your young students can "drive" the iPad around a space (incorporating augmented reality through the camera) and decide where to park for their customers, using the horn and music to signal it's time to buy ice cream!


There are several modes that interact with each other, almost like spaces within the truck- stack ice cream scoops on one screen, mix yogurt on another, and bring it all to the cash register screen. Overall, a great context for descriptive language, requesting, and all sorts of language structures, as well as building play skills.


Common Core Connection:
CCSS.ELA-Literacy.SL.K.4 Describe familiar people, places, things, and events and, with prompting and support, provide additional detail.

Tuesday, May 7, 2013

Celebrate Speech with a "Silent Film"

Another way to promote awareness of the importance of speech during Better Speech and Hearing Month (or at other times) is to explore the idea of silent film. Many kids are unaware that there were ever films with no sound, and any language-neutral visual can be a great context for having kids generate language. I found this treasure, "Unspoken Content: Silent Film in the ESL Classroom," in just a quick search on this topic. The article describes how "The Painted Lady," available, like many films, on YouTube (or via PlayTube to cache it if YouTube is blocked), can be used to target narrative and metalinguistic awareness.

I mention all this primarily because Google has recently unveiled a cool new resource: The Peanut Gallery. This website (not accessible on iPad; it works only in Google Chrome) allows you to dictate language that will appear as "silent film" titles over any of a selection of more than a dozen old movie clips. The site uses Google's Web Speech API and is remarkably accurate. Just speak, and it will convert your speech to text within titles over the movie clip, which is then saved and shareable.
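For the technically curious, the speech-to-title-card idea can be sketched in a few lines. This is a hypothetical sketch, not Peanut Gallery's actual code: the helper function `toTitleCards` and the `showCard` callback are my own illustrative names, and only `webkitSpeechRecognition` is the real (Chrome-prefixed) Web Speech API interface.

```javascript
// Pure helper (hypothetical): turn recognized phrases into "silent film"
// style title cards, trimming whitespace, dropping empty results, and
// uppercasing the text as intertitles typically were.
function toTitleCards(phrases) {
  return phrases
    .map(p => p.trim())
    .filter(p => p.length > 0)
    .map((p, i) => ({ index: i, text: p.toUpperCase() }));
}

// Browser wiring (sketch only; runs in Chrome, not under Node):
// const rec = new webkitSpeechRecognition();
// rec.continuous = true;
// rec.onresult = (e) => {
//   const phrase = e.results[e.results.length - 1][0].transcript;
//   showCard(toTitleCards([phrase])[0]); // overlay the card on the clip
// };
// rec.start();
```

The point of the sketch is simply that the browser hands recognized speech to the page as plain text, which the page then styles and overlays on the video.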


You can see one of my attempts at it here.

The Language Lens on this site, then, is that it provides you with many contexts to have students analyze situations (characters, settings, ongoing events) and generate narration related to this, which employs the interpretation of body language and emotions, as well as, potentially, metalinguistics such as sarcasm and understatement. 

You will need to insert test dialogue (e.g. "Action" or "Oh no!") just to make the film proceed at first, so that kids get the context and can plan dialogue for a second or third try (or more), as improvising may be too difficult without your scaffolding.  

Common Core Connection
CCSS.ELA-Literacy.RL.4.2 Determine a theme of a story, drama, or poem from details in the text; summarize the text.

Wednesday, May 1, 2013

Open Better Speech and Hearing Month with Sound Uncovered

Sound Uncovered is a gorgeous free app for iPad that you can use as a context to introduce Better Speech and Hearing Month to your students. The app explores the science of sound through a variety of interactives that you can use to bridge discussions about hearing conservation. In addition, the app itself provides a lot of content that you can use for mapping expository text, as it presents much information about how sound works. This is also a key curriculum area in "energy, light and sound" type units of science.

A sampling of the activities:
-Which Car Would You Buy?: presents sounds produced by cars and car parts and links them to purchasing decisions.
-What's Making This Sound? and Sounds Like?: have you listen to sounds, or to people's descriptions of a sound, in order to guess what they are talking about (inferences!).
-Eyes vs. Ears: talks about "listening with your eyes" while exploring how visual input helps us understand sounds.
-Stop Me If You've Heard This One: demonstrates that you can't talk and listen at the same time! I have a lot of students who could benefit from that one...



That's where I heard it, and the age is right!

Ultimately, Sound Uncovered is probably best for older (upper elementary, MS/HS) and high-functioning students, but the interactives could be adapted for young students. Exploratorium, the interactive museum of science, art and perception in San Francisco, offers a similar open-ended app called Color Uncovered, which also looks to be a good context for eliciting language and description.

Common Core Connection:
CCSS.ELA-Literacy.SL.6.1d Review the key ideas expressed and demonstrate understanding of multiple perspectives through reflection and paraphrasing.

Tuesday, March 12, 2013

mARch: Augmenting with Aurasma, Part 3- Using Text and Sharing your Work.

The last several posts here focused on using the Aurasma app to "augment," or layer discoverable visual information, over an image, specifically a book page. These same steps can be used to augment other materials- flash cards, posters, bulletin boards, printed images or student-created art. Part one showed how to use Aurasma's library of images and animations, and part 2 gave steps for using your own images and video as "auras."

In my previous series, I showed how QR codes could be used to display text for language stimulation.  This can be done with Aurasma, as well. However, while you can easily generate a QR code that displays text (I need to update this as I now think other QR generators are easier to use than Kaywa), Aurasma is image-based. So, you have to make your text into an image!  This is easy enough, as you can use a drawing app to write text and save that as an image to the camera roll, or use another app and take a screenshot of the text.

Here's how you do it:


1. Use a drawing app such as Doodle Buddy to write single words to be displayed as images. For example, you can use a conjunction such as "after" to promote complex sentence formulation in context. You could also use vocabulary words. Doodle Buddy lets you save the image, but if you want to write longer text, you could just use an app such as Notes, and take a screenshot.


2. Follow the steps in previous posts to make the text an aura.

As stated in the opening posts, when you make an aura it is available in that version of Aurasma, on that device.  Auras can be shared between devices by emailing them as a link, however. These steps are a little complicated and were made more so in the newer version of Aurasma, but I thought I would share them anyway:



You would want to keep auras private on your own password-protected device, rather than sharing, if they contain images and video of students.

That's it for Aurasma! I look forward to sharing a few other apps this month to show you how augmented reality can be useful in your practice, but first, a Common Core Connection related to this post: 
CCSS.ELA-Literacy.SL.3.6 Speak in complete sentences when appropriate to task and situation in order to provide requested detail or clarification.

Thursday, March 7, 2013

mARch: Augmenting with Aurasma, Part 2- Making it Your Own!

In the last post, we looked at how to use Aurasma's own library of images, animations and 3D models to create an "aura"- an image overlay that appears when you scan a visual material (usually, another image).

Like any great "Speechie" app, Aurasma allows you to use your own images or even videos as auras. As always, this equals limitless contexts for applying the app!

In this post, we will look at how to use materials from your Photos app (aka camera roll and photo album) in the context of augmenting a visual material such as a book. It will be important that you have read Part 1, as I am not going to go through each step again; I will just note how the process differs when creating an aura from your own photos or videos.


If you are not sure how to do this step, see this post about Saving Images to iPad.


OR, another option is to create your own images or video using the camera. If you want to augment a material with kids' own drawing or writing, shoot a picture of it!


Note that, as stated, an extra step is involved when using your own images or video: you have to name the image/video file (the overlay) and the aura (I usually keep the name the same). Also, if using a photo or video of a child, keep the file private, not public, when you create the aura (see last post's step 6).


The rest of the steps work the same as in Part 1!

If you are creating a video aura of speaking about a book connection, as I modeled above, a Common Core Connection for you:
CCSS.ELA-Literacy.SL.4.4 Report on a topic or text, tell a story, or recount an experience in an organized manner, using appropriate facts and relevant, descriptive details to support main ideas or themes; speak clearly at an understandable pace.

Next post, in wrapping up this look at Aurasma, we'll be looking at how to add an aura that displays text (since auras are images, can you guess how?) and how to share auras to other devices.

Tuesday, March 5, 2013

mARch: Augmenting with Aurasma, Part 1.

In yesterday's post, I introduced the topic of augmented reality (AR), which can be used to add visuals such as animation, images, video and text to many contexts. We'll be looking at some stand-alone apps that do their specific AR thing, but I wanted to start with Aurasma, the app that was featured in yesterday's video.  This video showed Aurasma applied in schools to give kids a way to link, say, a bulletin board to related images and video.  Aurasma is actually pretty easy to use!

Note: Aurasma is available for free for iPhone, iPod and iPad 2 and above. This is because a camera is essential to the function of this app and many of the others this month. My apologies to readers with an iPad 1. This app is also available for Android, but I can't attest to how it works.

So, first, a context. Let's say you read a picture book with your kids, which I hope you do occasionally because there are so many skills you can build around picture books. What if you could then (after reading) make the picture book an interactive experience with the students, allowing them to scan the book to view, discuss, and respond to images, text, or video associated with the book? What if they could record videos and make these other visuals pop up themselves? They can.

In this post, we are first going to see the steps of creating just one "aura" with the picture book The Big Orange Splot, by Daniel Manus Pinkwater. The steps flow really easily once you see how it works. The Big Orange Splot is a great story about individuality. A man lives on a "neat street" where everything is the same. One day, a seagull drops a can of orange paint on his roof. Instead of just cleaning it off, he allows it to inspire him to make all kinds of interesting changes to his house. His neighbors are at first outraged, then experience the same inspiration.

In this series of steps, you will see how to make an image aura from Aurasma's own library of images "float" above a book page when the page is scanned. Specifically, through these steps a seagull "aura" is accessed when the book's cover is scanned. What could you do with that? It sure is a fun way to prompt retelling and understanding of an initiating event in a narrative. You can think of doing the same type of thing for another visual material, such as a printed or drawn picture, a poster, a flashcard, etc...

When you open Aurasma the camera will be activated:









Give it a try! These 8 steps seem like a lot at first, but you'll see they are a quick, logical series after practicing a few times. 

See my other posts detailing other features of Aurasma:

Oh, and by the way, there's a Common Core Connection:
CCSS.ELA-Literacy.SL.2.2 Recount or describe key ideas or details from a text read aloud or information presented orally or through other media.

Wednesday, August 29, 2012

Google Earth and Cracking Curriculum Content

It's exciting to have the continued opportunity to contribute to the ASHA Leader for a few of their APP-titudes columns.  It's a different kind of writing, and I have to go back to stuff I did not learn when completing my journalism degree at BU, and that Magazine Journalism class I never took (I never really liked asking people, you know, questions), but it seems to come out ok after editorial assistance.

In my piece that just came out this week, I discuss apps that clinicians can use to facilitate the daunting process of making therapy educationally relevant, meaning that the context mirrors or parallels what is going on in the classroom setting. This is a huge passion of mine, though I feel I must clarify two possible misconceptions. First of all, I am not talking about SLPs being tutors of classroom subjects. Rather, the classroom content can be used as a context for targeting goals and strategies: e.g. categorization, description, use of graphic organizers, visualization, and so on. Secondly, although this topic is important, I realized as I saw my column in an issue filled with information about Common Core that it wasn't really about Common Core, as (for now) those standards cover only Language Arts and Math. But the information I shared can be about Common Core, and I decided that, where possible, I would include a Common Core Connection in my posts to link resources shared here to relevant Common Core standards, as I know many public school SLPs are struggling to integrate them.

In my column, I wrote, "In addition to the built-in maps app, Google Earth, available for iOS, Android, and any desktop or laptop machine, provides an extraordinary view of any geographic region. Google Earth allows clinicians to target spatial concepts, descriptive language, categories, and reading comprehension, all by zooming in on locations and viewing photos in the Panoramio layer. The stunning interactive 3D imagery available on the desktop version will soon be available on mobile devices as well."

These columns are written somewhat ahead of time, and I wanted to let you know (and see) that the free Google Earth app NOW has 3D imagery for select cities (with more to come): Boston (yay), Los Angeles, Seattle, Denver, San Francisco, Geneva, and Rome.

A 3D view of Boston you can interact with via touch.  The new Tour Guide feature makes Google Earth even more navigable with "playable" (and pausable) views of landmarks and key geographic features. Panoramio Photos provide you with countless visual stimuli to explore, describe and discuss with students.

The new version also comes with a super-handy tutorial that opens on launch (later it can be re-accessed anytime under the "wrench" icon) that can provide a nice lesson in following directions:


This visual/touch tutorial shows you how to navigate in Google Earth for iPad, and also gives you a good opportunity to target spatial concepts including cardinal directions. Again, bring it up anytime under the "wrench" icon.

I really hope you enjoy this great app.  The only caveats I can share are that the 3D imagery is not available on iPad 1, and that I sometimes get a message that "Google Earth is running low on memory" but the app continues to function.

Common Core Connection
This app can be used, with your verbal prompting and scaffolding, to target standards such as:
SL.3.3. Ask and answer questions about information from a speaker, offering appropriate elaboration and detail.



 